Test Report: Hyper-V_Windows 16573

2f0304e5caeb910cf6b713a3408f4279364136e7:2023-05-24:29404

Failed tests (6/300)

| Order | Failed test                                   | Duration (s) |
|-------|-----------------------------------------------|--------------|
| 205   | TestMultiNode/serial/PingHostFrom2Pods        | 39.06        |
| 211   | TestMultiNode/serial/RestartKeepsNodes        | 358.21       |
| 225   | TestRunningBinaryUpgrade                      | 383.47       |
| 240   | TestNoKubernetes/serial/StartWithK8s          | 317.38       |
| 241   | TestPause/serial/SecondStartNoReconfiguration | 227.77       |
| 254   | TestStoppedBinaryUpgrade/Upgrade              | 405.9        |
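The first failure below is the pod-to-host ping check. To repeat that check by hand, the same way the test drives it, the commands from the log can be re-run directly (a sketch only: it assumes the multinode-237000 profile is still running, and the busybox pod name and the 172.27.128.1 host IP reported here will differ on a fresh run):

# Resolve the host IP the way the test does (field 3 of line 5 of the nslookup output for host.minikube.internal)
out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-9t5bp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"

# Ping that IP from inside the pod; a non-zero exit status here is exactly what the test reports as the failure
out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-9t5bp -- sh -c "ping -c 1 172.27.128.1"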
TestMultiNode/serial/PingHostFrom2Pods (39.06s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-9t5bp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-9t5bp -- sh -c "ping -c 1 172.27.128.1"
E0524 19:30:08.995438    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-9t5bp -- sh -c "ping -c 1 172.27.128.1": exit status 1 (10.5028563s)

-- stdout --
	PING 172.27.128.1 (172.27.128.1): 56 data bytes
	
	--- 172.27.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (172.27.128.1) from pod (busybox-67b7f59bb-9t5bp): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-tdzj2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-tdzj2 -- sh -c "ping -c 1 172.27.128.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-tdzj2 -- sh -c "ping -c 1 172.27.128.1": exit status 1 (10.5091741s)

-- stdout --
	PING 172.27.128.1 (172.27.128.1): 56 data bytes
	
	--- 172.27.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (172.27.128.1) from pod (busybox-67b7f59bb-tdzj2): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-237000 -n multinode-237000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-237000 -n multinode-237000: (5.1609658s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 logs -n 25: (4.6347594s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-742700 ssh -- ls                    | mount-start-2-742700 | minikube1\jenkins | v1.30.1 | 24 May 23 19:23 UTC | 24 May 23 19:23 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-742700                           | mount-start-1-742700 | minikube1\jenkins | v1.30.1 | 24 May 23 19:23 UTC | 24 May 23 19:23 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-742700 ssh -- ls                    | mount-start-2-742700 | minikube1\jenkins | v1.30.1 | 24 May 23 19:23 UTC | 24 May 23 19:23 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-742700                           | mount-start-2-742700 | minikube1\jenkins | v1.30.1 | 24 May 23 19:23 UTC | 24 May 23 19:23 UTC |
	| start   | -p mount-start-2-742700                           | mount-start-2-742700 | minikube1\jenkins | v1.30.1 | 24 May 23 19:23 UTC | 24 May 23 19:24 UTC |
	| mount   | C:\Users\jenkins.minikube1:/minikube-host         | mount-start-2-742700 | minikube1\jenkins | v1.30.1 | 24 May 23 19:24 UTC |                     |
	|         | --profile mount-start-2-742700 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-742700 ssh -- ls                    | mount-start-2-742700 | minikube1\jenkins | v1.30.1 | 24 May 23 19:24 UTC | 24 May 23 19:25 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-742700                           | mount-start-2-742700 | minikube1\jenkins | v1.30.1 | 24 May 23 19:25 UTC | 24 May 23 19:25 UTC |
	| delete  | -p mount-start-1-742700                           | mount-start-1-742700 | minikube1\jenkins | v1.30.1 | 24 May 23 19:25 UTC | 24 May 23 19:25 UTC |
	| start   | -p multinode-237000                               | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:25 UTC | 24 May 23 19:29 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- apply -f                   | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:29 UTC | 24 May 23 19:29 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- rollout                    | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:29 UTC | 24 May 23 19:29 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- get pods -o                | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:29 UTC | 24 May 23 19:29 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- get pods -o                | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:29 UTC | 24 May 23 19:29 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- exec                       | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:29 UTC | 24 May 23 19:29 UTC |
	|         | busybox-67b7f59bb-9t5bp --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- exec                       | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:29 UTC | 24 May 23 19:29 UTC |
	|         | busybox-67b7f59bb-tdzj2 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- exec                       | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:29 UTC | 24 May 23 19:29 UTC |
	|         | busybox-67b7f59bb-9t5bp --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- exec                       | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:29 UTC | 24 May 23 19:29 UTC |
	|         | busybox-67b7f59bb-tdzj2 --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- exec                       | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:29 UTC | 24 May 23 19:29 UTC |
	|         | busybox-67b7f59bb-9t5bp -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- exec                       | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:29 UTC | 24 May 23 19:29 UTC |
	|         | busybox-67b7f59bb-tdzj2 -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- get pods -o                | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:29 UTC | 24 May 23 19:29 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- exec                       | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:29 UTC | 24 May 23 19:29 UTC |
	|         | busybox-67b7f59bb-9t5bp                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- exec                       | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:29 UTC |                     |
	|         | busybox-67b7f59bb-9t5bp -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.128.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- exec                       | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:30 UTC | 24 May 23 19:30 UTC |
	|         | busybox-67b7f59bb-tdzj2                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-237000 -- exec                       | multinode-237000     | minikube1\jenkins | v1.30.1 | 24 May 23 19:30 UTC |                     |
	|         | busybox-67b7f59bb-tdzj2 -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.27.128.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 19:25:18
	Running on machine: minikube1
	Binary: Built with gc go1.20.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 19:25:18.283819    2624 out.go:296] Setting OutFile to fd 696 ...
	I0524 19:25:18.351850    2624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 19:25:18.351850    2624 out.go:309] Setting ErrFile to fd 920...
	I0524 19:25:18.351850    2624 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 19:25:18.377853    2624 out.go:303] Setting JSON to false
	I0524 19:25:18.381473    2624 start.go:125] hostinfo: {"hostname":"minikube1","uptime":4831,"bootTime":1684951486,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2965 Build 19045.2965","kernelVersion":"10.0.19045.2965 Build 19045.2965","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0524 19:25:18.381473    2624 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 19:25:18.386900    2624 out.go:177] * [multinode-237000] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	I0524 19:25:18.391227    2624 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:25:18.390622    2624 notify.go:220] Checking for updates...
	I0524 19:25:18.393924    2624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 19:25:18.396801    2624 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0524 19:25:18.398585    2624 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 19:25:18.401887    2624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 19:25:18.404219    2624 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 19:25:20.101561    2624 out.go:177] * Using the hyperv driver based on user configuration
	I0524 19:25:20.107364    2624 start.go:295] selected driver: hyperv
	I0524 19:25:20.107364    2624 start.go:870] validating driver "hyperv" against <nil>
	I0524 19:25:20.107364    2624 start.go:881] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 19:25:20.160326    2624 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 19:25:20.162372    2624 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 19:25:20.162372    2624 cni.go:84] Creating CNI manager for ""
	I0524 19:25:20.162372    2624 cni.go:136] 0 nodes found, recommending kindnet
	I0524 19:25:20.162372    2624 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0524 19:25:20.162372    2624 start_flags.go:319] config:
	{Name:multinode-237000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 19:25:20.163118    2624 iso.go:125] acquiring lock: {Name:mk3b29db369ab0f922ac5eeb788beee87e18ec94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 19:25:20.171629    2624 out.go:177] * Starting control plane node multinode-237000 in cluster multinode-237000
	I0524 19:25:20.174732    2624 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 19:25:20.175759    2624 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0524 19:25:20.175759    2624 cache.go:57] Caching tarball of preloaded images
	I0524 19:25:20.176054    2624 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0524 19:25:20.176054    2624 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 19:25:20.176644    2624 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\config.json ...
	I0524 19:25:20.176644    2624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\config.json: {Name:mk54175c836c0cb35ba36fe27525b62eb1f09c93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:25:20.177552    2624 cache.go:195] Successfully downloaded all kic artifacts
	I0524 19:25:20.177552    2624 start.go:364] acquiring machines lock for multinode-237000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 19:25:20.178500    2624 start.go:368] acquired machines lock for "multinode-237000" in 948µs
	I0524 19:25:20.178660    2624 start.go:93] Provisioning new machine with config: &{Name:multinode-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 19:25:20.178660    2624 start.go:125] createHost starting for "" (driver="hyperv")
	I0524 19:25:20.183825    2624 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 19:25:20.183825    2624 start.go:159] libmachine.API.Create for "multinode-237000" (driver="hyperv")
	I0524 19:25:20.183825    2624 client.go:168] LocalClient.Create starting
	I0524 19:25:20.183825    2624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0524 19:25:20.183825    2624 main.go:141] libmachine: Decoding PEM data...
	I0524 19:25:20.183825    2624 main.go:141] libmachine: Parsing certificate...
	I0524 19:25:20.183825    2624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0524 19:25:20.183825    2624 main.go:141] libmachine: Decoding PEM data...
	I0524 19:25:20.183825    2624 main.go:141] libmachine: Parsing certificate...
	I0524 19:25:20.183825    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0524 19:25:20.647705    2624 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0524 19:25:20.647963    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:20.647963    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0524 19:25:21.316235    2624 main.go:141] libmachine: [stdout =====>] : False
	
	I0524 19:25:21.316235    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:21.316365    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0524 19:25:21.861418    2624 main.go:141] libmachine: [stdout =====>] : True
	
	I0524 19:25:21.861623    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:21.861623    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0524 19:25:23.449485    2624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0524 19:25:23.449485    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:23.451589    2624 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.30.1-1684536668-16501-amd64.iso...
	I0524 19:25:23.884208    2624 main.go:141] libmachine: Creating SSH key...
	I0524 19:25:24.377817    2624 main.go:141] libmachine: Creating VM...
	I0524 19:25:24.377817    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0524 19:25:25.831986    2624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0524 19:25:25.832163    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:25.832238    2624 main.go:141] libmachine: Using switch "Default Switch"
	I0524 19:25:25.832366    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0524 19:25:26.551274    2624 main.go:141] libmachine: [stdout =====>] : True
	
	I0524 19:25:26.551477    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:26.551477    2624 main.go:141] libmachine: Creating VHD
	I0524 19:25:26.551550    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0524 19:25:28.306049    2624 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 04C656C3-5399-4DAD-AB4D-7BD584A1B3EA
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0524 19:25:28.306049    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:28.306049    2624 main.go:141] libmachine: Writing magic tar header
	I0524 19:25:28.306263    2624 main.go:141] libmachine: Writing SSH key tar header
	I0524 19:25:28.315780    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0524 19:25:30.089408    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:25:30.089609    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:30.089609    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\disk.vhd' -SizeBytes 20000MB
	I0524 19:25:31.285621    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:25:31.285621    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:31.285621    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-237000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0524 19:25:33.271127    2624 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%!)(MISSING) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-237000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0524 19:25:33.271127    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:33.271127    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-237000 -DynamicMemoryEnabled $false
	I0524 19:25:34.123401    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:25:34.123401    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:34.123493    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-237000 -Count 2
	I0524 19:25:34.922859    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:25:34.922859    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:34.922859    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-237000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\boot2docker.iso'
	I0524 19:25:36.072313    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:25:36.072379    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:36.072379    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-237000 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\disk.vhd'
	I0524 19:25:37.341933    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:25:37.342264    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:37.342345    2624 main.go:141] libmachine: Starting VM...
	I0524 19:25:37.342345    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-237000
	I0524 19:25:39.139756    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:25:39.139756    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:39.139756    2624 main.go:141] libmachine: Waiting for host to start...
	I0524 19:25:39.139756    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:25:39.915121    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:25:39.915121    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:39.915121    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:25:41.011138    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:25:41.011138    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:42.011984    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:25:42.741800    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:25:42.741832    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:42.741937    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:25:43.805180    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:25:43.805491    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:44.809950    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:25:45.550187    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:25:45.550187    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:45.550187    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:25:46.596942    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:25:46.597204    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:47.611769    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:25:48.356316    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:25:48.372321    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:48.372321    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:25:49.449309    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:25:49.449573    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:50.451342    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:25:51.196673    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:25:51.196837    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:51.196930    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:25:52.226839    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:25:52.227089    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:53.231132    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:25:53.975332    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:25:53.975448    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:53.975521    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:25:54.989434    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:25:54.989811    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:55.990924    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:25:56.747448    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:25:56.747752    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:56.747821    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:25:57.765421    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:25:57.765731    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:58.769089    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:25:59.531899    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:25:59.531977    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:25:59.531977    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:00.586787    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:26:00.586824    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:01.589667    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:02.332165    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:02.332348    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:02.332348    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:03.397979    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:26:03.397979    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:04.400304    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:05.178724    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:05.178944    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:05.179053    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:06.288788    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:06.289222    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:06.289286    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:07.048933    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:07.048999    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:07.048999    2624 machine.go:88] provisioning docker machine ...
	I0524 19:26:07.049129    2624 buildroot.go:166] provisioning hostname "multinode-237000"
	I0524 19:26:07.049224    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:07.841565    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:07.841565    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:07.841662    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:08.982021    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:08.982445    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:08.986752    2624 main.go:141] libmachine: Using SSH client type: native
	I0524 19:26:08.996967    2624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.130.107 22 <nil> <nil>}
	I0524 19:26:08.996967    2624 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-237000 && echo "multinode-237000" | sudo tee /etc/hostname
	I0524 19:26:09.181909    2624 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-237000
	
	I0524 19:26:09.181909    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:09.965783    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:09.965783    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:09.965783    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:11.066380    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:11.066637    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:11.071242    2624 main.go:141] libmachine: Using SSH client type: native
	I0524 19:26:11.072088    2624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.130.107 22 <nil> <nil>}
	I0524 19:26:11.072088    2624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-237000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-237000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-237000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 19:26:11.242030    2624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 19:26:11.242030    2624 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0524 19:26:11.242030    2624 buildroot.go:174] setting up certificates
	I0524 19:26:11.242030    2624 provision.go:83] configureAuth start
	I0524 19:26:11.242030    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:12.003222    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:12.003222    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:12.003222    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:13.125088    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:13.125088    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:13.125088    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:13.898793    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:13.898793    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:13.898861    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:15.026432    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:15.026683    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:15.026772    2624 provision.go:138] copyHostCerts
	I0524 19:26:15.026926    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0524 19:26:15.026926    2624 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0524 19:26:15.026926    2624 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0524 19:26:15.027664    2624 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0524 19:26:15.028469    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0524 19:26:15.028469    2624 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0524 19:26:15.028469    2624 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0524 19:26:15.029108    2624 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0524 19:26:15.029625    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0524 19:26:15.030157    2624 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0524 19:26:15.030157    2624 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0524 19:26:15.030316    2624 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0524 19:26:15.031021    2624 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-237000 san=[172.27.130.107 172.27.130.107 localhost 127.0.0.1 minikube multinode-237000]
	I0524 19:26:15.290653    2624 provision.go:172] copyRemoteCerts
	I0524 19:26:15.301597    2624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 19:26:15.301597    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:16.104934    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:16.104934    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:16.105018    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:17.206674    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:17.206674    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:17.206674    2624 sshutil.go:53] new ssh client: &{IP:172.27.130.107 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:26:17.333720    2624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.032124s)
	I0524 19:26:17.333720    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0524 19:26:17.333720    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0524 19:26:17.381992    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0524 19:26:17.381992    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0524 19:26:17.424118    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0524 19:26:17.424835    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0524 19:26:17.465980    2624 provision.go:86] duration metric: configureAuth took 6.2239519s
	I0524 19:26:17.465980    2624 buildroot.go:189] setting minikube options for container-runtime
	I0524 19:26:17.467634    2624 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:26:17.467785    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:18.250778    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:18.250778    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:18.250778    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:19.355398    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:19.355624    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:19.359817    2624 main.go:141] libmachine: Using SSH client type: native
	I0524 19:26:19.360531    2624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.130.107 22 <nil> <nil>}
	I0524 19:26:19.360531    2624 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 19:26:19.520467    2624 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 19:26:19.520467    2624 buildroot.go:70] root file system type: tmpfs
	I0524 19:26:19.520811    2624 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 19:26:19.520844    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:20.335715    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:20.335715    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:20.335715    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:21.411573    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:21.411655    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:21.416085    2624 main.go:141] libmachine: Using SSH client type: native
	I0524 19:26:21.416085    2624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.130.107 22 <nil> <nil>}
	I0524 19:26:21.417161    2624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 19:26:21.596917    2624 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 19:26:21.597057    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:22.354641    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:22.354641    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:22.354641    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:23.425512    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:23.425512    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:23.429333    2624 main.go:141] libmachine: Using SSH client type: native
	I0524 19:26:23.430827    2624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.130.107 22 <nil> <nil>}
	I0524 19:26:23.430984    2624 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 19:26:24.607744    2624 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 19:26:24.607744    2624 machine.go:91] provisioned docker machine in 17.5587523s
	I0524 19:26:24.607744    2624 client.go:171] LocalClient.Create took 1m4.4239454s
	I0524 19:26:24.607744    2624 start.go:167] duration metric: libmachine.API.Create for "multinode-237000" took 1m4.4239454s
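Editor's note: provisioning finishes by diffing the freshly rendered docker.service against the installed unit and only swapping it in (followed by daemon-reload, enable, restart) when they differ; here the diff fails because no unit existed yet, so the new file is installed and the symlink is created. A rough Go sketch of that compare-then-replace pattern, with paths and helper names invented for the example rather than taken from minikube's internals:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// installIfChanged writes newUnit next to the live unit and only replaces the
	// live file (then reloads and restarts docker) when the contents differ,
	// mirroring the "diff -u old new || { mv; systemctl ... }" step in the log.
	func installIfChanged(livePath string, newUnit []byte) error {
		current, err := os.ReadFile(livePath)
		if err == nil && bytes.Equal(current, newUnit) {
			return nil // unit unchanged, leave the running service alone
		}
		if err := os.WriteFile(livePath+".new", newUnit, 0o644); err != nil {
			return err
		}
		if err := os.Rename(livePath+".new", livePath); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		if err := installIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
			fmt.Println(err)
		}
	}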
	I0524 19:26:24.607744    2624 start.go:300] post-start starting for "multinode-237000" (driver="hyperv")
	I0524 19:26:24.607744    2624 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 19:26:24.617961    2624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 19:26:24.617961    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:25.393660    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:25.393660    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:25.393660    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:26.479909    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:26.479909    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:26.480342    2624 sshutil.go:53] new ssh client: &{IP:172.27.130.107 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:26:26.606146    2624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.9881862s)
	I0524 19:26:26.617857    2624 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 19:26:26.625026    2624 command_runner.go:130] > NAME=Buildroot
	I0524 19:26:26.625026    2624 command_runner.go:130] > VERSION=2021.02.12-1-g419828a-dirty
	I0524 19:26:26.625026    2624 command_runner.go:130] > ID=buildroot
	I0524 19:26:26.625026    2624 command_runner.go:130] > VERSION_ID=2021.02.12
	I0524 19:26:26.625026    2624 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0524 19:26:26.625026    2624 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 19:26:26.625026    2624 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0524 19:26:26.625624    2624 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0524 19:26:26.625882    2624 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> 65602.pem in /etc/ssl/certs
	I0524 19:26:26.625882    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> /etc/ssl/certs/65602.pem
	I0524 19:26:26.637271    2624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0524 19:26:26.654093    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /etc/ssl/certs/65602.pem (1708 bytes)
	I0524 19:26:26.697087    2624 start.go:303] post-start completed in 2.089343s
	I0524 19:26:26.699660    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:27.442883    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:27.442883    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:27.442883    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:28.557889    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:28.558084    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:28.558264    2624 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\config.json ...
	I0524 19:26:28.561709    2624 start.go:128] duration metric: createHost completed in 1m8.3830758s
	I0524 19:26:28.562035    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:29.342773    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:29.342931    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:29.343010    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:30.415243    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:30.415522    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:30.419802    2624 main.go:141] libmachine: Using SSH client type: native
	I0524 19:26:30.420548    2624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.130.107 22 <nil> <nil>}
	I0524 19:26:30.420548    2624 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0524 19:26:30.577519    2624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684956390.575538560
	
	I0524 19:26:30.577615    2624 fix.go:207] guest clock: 1684956390.575538560
	I0524 19:26:30.577615    2624 fix.go:220] Guest: 2023-05-24 19:26:30.57553856 +0000 UTC Remote: 2023-05-24 19:26:28.561709 +0000 UTC m=+70.357503701 (delta=2.01382956s)
	I0524 19:26:30.577710    2624 fix.go:191] guest clock delta is within tolerance: 2.01382956s
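Editor's note: fix.go reads the guest clock with `date +%s.%N`, compares it to the host clock, and only resyncs when the delta exceeds a tolerance; here the 2.01s delta is accepted. An illustrative Go sketch of that comparison using the values from the log above (the 5s tolerance is an assumption for the example, not necessarily the value minikube uses):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the "seconds.nanoseconds" string produced by
	// `date +%s.%N` on the guest (GNU date prints all nine nanosecond digits)
	// and returns how far the guest clock is from the host clock.
	func guestClockDelta(guest string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		// Values taken from the log above; the 5s tolerance is only an example.
		host := time.Date(2023, 5, 24, 19, 26, 28, 561709000, time.UTC)
		delta, err := guestClockDelta("1684956390.575538560", host)
		if err != nil {
			fmt.Println(err)
			return
		}
		const tolerance = 5 * time.Second
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
	}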
	I0524 19:26:30.577740    2624 start.go:83] releasing machines lock for "multinode-237000", held for 1m10.3992366s
	I0524 19:26:30.577740    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:31.373655    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:31.373890    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:31.373890    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:32.510225    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:32.510330    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:32.513244    2624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 19:26:32.513244    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:32.522636    2624 ssh_runner.go:195] Run: cat /version.json
	I0524 19:26:32.522636    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:26:33.320218    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:33.320423    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:33.320423    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:26:33.320516    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:33.320516    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:33.320516    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:26:34.515070    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:34.515182    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:34.515769    2624 sshutil.go:53] new ssh client: &{IP:172.27.130.107 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:26:34.545217    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:26:34.546079    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:26:34.546383    2624 sshutil.go:53] new ssh client: &{IP:172.27.130.107 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:26:34.703091    2624 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0524 19:26:34.703091    2624 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.1898477s)
	I0524 19:26:34.703303    2624 command_runner.go:130] > {"iso_version": "v1.30.1-1684536668-16501", "kicbase_version": "v0.0.39-1684523789-16533", "minikube_version": "v1.30.1", "commit": "4302bbdfbbd8ec304b126be6025f52f2ccb3add9"}
	I0524 19:26:34.703373    2624 ssh_runner.go:235] Completed: cat /version.json: (2.1807378s)
	I0524 19:26:34.713093    2624 ssh_runner.go:195] Run: systemctl --version
	I0524 19:26:34.721765    2624 command_runner.go:130] > systemd 247 (247)
	I0524 19:26:34.721765    2624 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0524 19:26:34.732421    2624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0524 19:26:34.740721    2624 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0524 19:26:34.740981    2624 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 19:26:34.754446    2624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 19:26:34.778796    2624 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0524 19:26:34.778796    2624 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
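Editor's note: because kindnet will be used as the CNI, any pre-existing bridge/podman CNI configs are renamed out of the way with a `.mk_disabled` suffix (the podman bridge conflist above). An illustrative Go equivalent of that rename pass; the name patterns and suffix are copied from the find command, everything else is a sketch:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableConflictingCNI renames bridge/podman CNI configs so the runtime
	// ignores them, mirroring the find/mv command in the log.
	func disableConflictingCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
				continue
			}
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableConflictingCNI("/etc/cni/net.d")
		if err != nil {
			fmt.Println(err)
		}
		fmt.Println("disabled", disabled, "bridge cni config(s)")
	}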
	I0524 19:26:34.779561    2624 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 19:26:34.786481    2624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 19:26:34.824315    2624 docker.go:633] Got preloaded images: 
	I0524 19:26:34.824315    2624 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0524 19:26:34.834271    2624 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 19:26:34.850856    2624 command_runner.go:139] > {"Repositories":{}}
	I0524 19:26:34.860945    2624 ssh_runner.go:195] Run: which lz4
	I0524 19:26:34.866414    2624 command_runner.go:130] > /usr/bin/lz4
	I0524 19:26:34.866466    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0524 19:26:34.876695    2624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0524 19:26:34.881643    2624 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 19:26:34.881643    2624 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 19:26:34.881643    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (412256110 bytes)
	I0524 19:26:37.396986    2624 docker.go:597] Took 2.530448 seconds to copy over tarball
	I0524 19:26:37.407140    2624 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0524 19:26:46.300035    2624 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (8.8928481s)
	I0524 19:26:46.300077    2624 ssh_runner.go:146] rm: /preloaded.tar.lz4
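Editor's note: since no images were preloaded, the 412 MB preload tarball is copied into the guest, unpacked over /var with lz4, and then removed. A short Go sketch of that extract-and-clean-up step; it runs the same tar invocation shown in the log, with paths taken from the log and the helper name invented:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image preload over destDir and
	// deletes the tarball afterwards, mirroring the tar/rm pair in the log.
	func extractPreload(tarball, destDir string) error {
		if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", destDir, "-xf", tarball).CombinedOutput(); err != nil {
			return fmt.Errorf("tar failed: %v: %s", err, out)
		}
		if out, err := exec.Command("sudo", "rm", "-f", tarball).CombinedOutput(); err != nil {
			return fmt.Errorf("rm failed: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Println(err)
		}
	}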
	I0524 19:26:46.367909    2624 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 19:26:46.386421    2624 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.7-0":"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83":"sha256:86b6af7dd652c1b38118be1c338e
9354b33469e69a218f7e290a0ca5304ad681"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.27.2":"sha256:c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370","registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9":"sha256:c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.27.2":"sha256:ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12","registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56":"sha256:ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.27.2":"sha256:b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee","registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f":"sha256:b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d
32174dc13e7dee"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.27.2":"sha256:89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0","registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177":"sha256:89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0524 19:26:46.387001    2624 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0524 19:26:46.428958    2624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:26:46.598478    2624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 19:26:49.282486    2624 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6840099s)
	I0524 19:26:49.282680    2624 start.go:481] detecting cgroup driver to use...
	I0524 19:26:49.282919    2624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 19:26:49.306629    2624 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0524 19:26:49.315218    2624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 19:26:49.339965    2624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 19:26:49.355731    2624 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 19:26:49.363827    2624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 19:26:49.390443    2624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 19:26:49.416881    2624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 19:26:49.445083    2624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 19:26:49.469726    2624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 19:26:49.500921    2624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 19:26:49.523885    2624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 19:26:49.540707    2624 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0524 19:26:49.550374    2624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 19:26:49.574422    2624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:26:49.753695    2624 ssh_runner.go:195] Run: sudo systemctl restart containerd
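Editor's note: because the detected cgroup driver will be cgroupfs, containerd's config.toml is patched above with a series of sed expressions (sandbox image, SystemdCgroup = false, runc v2 runtime, conf_dir) before the daemon is restarted. A minimal Go sketch of one such in-place edit; the file path and regexp mirror the corresponding sed command, the rest is illustrative:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setSystemdCgroup rewrites the SystemdCgroup setting in containerd's
	// config.toml, the Go analogue of:
	//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	func setSystemdCgroup(path string, enabled bool) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		patched := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
		return os.WriteFile(path, patched, 0o644)
	}

	func main() {
		if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
			fmt.Println(err)
		}
	}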
	I0524 19:26:49.781860    2624 start.go:481] detecting cgroup driver to use...
	I0524 19:26:49.795334    2624 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 19:26:49.815482    2624 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0524 19:26:49.816467    2624 command_runner.go:130] > [Unit]
	I0524 19:26:49.816467    2624 command_runner.go:130] > Description=Docker Application Container Engine
	I0524 19:26:49.816467    2624 command_runner.go:130] > Documentation=https://docs.docker.com
	I0524 19:26:49.816538    2624 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0524 19:26:49.816538    2624 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0524 19:26:49.816538    2624 command_runner.go:130] > StartLimitBurst=3
	I0524 19:26:49.816538    2624 command_runner.go:130] > StartLimitIntervalSec=60
	I0524 19:26:49.816538    2624 command_runner.go:130] > [Service]
	I0524 19:26:49.816538    2624 command_runner.go:130] > Type=notify
	I0524 19:26:49.816538    2624 command_runner.go:130] > Restart=on-failure
	I0524 19:26:49.816664    2624 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0524 19:26:49.816664    2624 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0524 19:26:49.816664    2624 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0524 19:26:49.816724    2624 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0524 19:26:49.816724    2624 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0524 19:26:49.816724    2624 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0524 19:26:49.816724    2624 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0524 19:26:49.816779    2624 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0524 19:26:49.816779    2624 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0524 19:26:49.816779    2624 command_runner.go:130] > ExecStart=
	I0524 19:26:49.816779    2624 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0524 19:26:49.816779    2624 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0524 19:26:49.816779    2624 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0524 19:26:49.816779    2624 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0524 19:26:49.816858    2624 command_runner.go:130] > LimitNOFILE=infinity
	I0524 19:26:49.816858    2624 command_runner.go:130] > LimitNPROC=infinity
	I0524 19:26:49.816858    2624 command_runner.go:130] > LimitCORE=infinity
	I0524 19:26:49.816858    2624 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0524 19:26:49.816858    2624 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0524 19:26:49.816858    2624 command_runner.go:130] > TasksMax=infinity
	I0524 19:26:49.816912    2624 command_runner.go:130] > TimeoutStartSec=0
	I0524 19:26:49.816912    2624 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0524 19:26:49.816912    2624 command_runner.go:130] > Delegate=yes
	I0524 19:26:49.816912    2624 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0524 19:26:49.816912    2624 command_runner.go:130] > KillMode=process
	I0524 19:26:49.816912    2624 command_runner.go:130] > [Install]
	I0524 19:26:49.816970    2624 command_runner.go:130] > WantedBy=multi-user.target
	I0524 19:26:49.828273    2624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 19:26:49.864139    2624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 19:26:49.904898    2624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 19:26:49.936655    2624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 19:26:49.970444    2624 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 19:26:50.029557    2624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 19:26:50.052196    2624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 19:26:50.083412    2624 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0524 19:26:50.093134    2624 ssh_runner.go:195] Run: which cri-dockerd
	I0524 19:26:50.099112    2624 command_runner.go:130] > /usr/bin/cri-dockerd
	I0524 19:26:50.111950    2624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 19:26:50.132101    2624 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 19:26:50.180085    2624 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 19:26:50.351352    2624 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 19:26:50.503552    2624 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 19:26:50.503552    2624 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 19:26:50.544505    2624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:26:50.720831    2624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 19:26:52.260982    2624 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5395799s)
	I0524 19:26:52.272763    2624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 19:26:52.462218    2624 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 19:26:52.655359    2624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 19:26:52.838714    2624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:26:53.027433    2624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 19:26:53.067678    2624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:26:53.253254    2624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 19:26:53.369430    2624 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 19:26:53.380035    2624 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 19:26:53.395852    2624 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0524 19:26:53.395905    2624 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0524 19:26:53.395905    2624 command_runner.go:130] > Device: 16h/22d	Inode: 919         Links: 1
	I0524 19:26:53.395972    2624 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0524 19:26:53.395972    2624 command_runner.go:130] > Access: 2023-05-24 19:26:53.274495549 +0000
	I0524 19:26:53.395972    2624 command_runner.go:130] > Modify: 2023-05-24 19:26:53.274495549 +0000
	I0524 19:26:53.396021    2624 command_runner.go:130] > Change: 2023-05-24 19:26:53.279495506 +0000
	I0524 19:26:53.396021    2624 command_runner.go:130] >  Birth: -
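Editor's note: start.go waits up to 60s for /var/run/cri-dockerd.sock to appear, probing it with stat; here the socket already exists. A hedged Go sketch of that poll-until-socket-exists wait (the poll interval and helper name are assumptions for the example):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists and is a unix socket, or the
	// timeout elapses — the same idea as the 60s wait on cri-dockerd.sock.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("cri-dockerd socket is ready")
	}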
	I0524 19:26:53.396051    2624 start.go:549] Will wait 60s for crictl version
	I0524 19:26:53.405970    2624 ssh_runner.go:195] Run: which crictl
	I0524 19:26:53.411961    2624 command_runner.go:130] > /usr/bin/crictl
	I0524 19:26:53.421818    2624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 19:26:53.483996    2624 command_runner.go:130] > Version:  0.1.0
	I0524 19:26:53.483996    2624 command_runner.go:130] > RuntimeName:  docker
	I0524 19:26:53.483996    2624 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0524 19:26:53.483996    2624 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0524 19:26:53.483996    2624 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 19:26:53.491029    2624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 19:26:53.531625    2624 command_runner.go:130] > 20.10.23
	I0524 19:26:53.538044    2624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 19:26:53.580751    2624 command_runner.go:130] > 20.10.23
	I0524 19:26:53.584911    2624 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 19:26:53.585040    2624 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0524 19:26:53.591459    2624 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0524 19:26:53.591459    2624 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0524 19:26:53.591459    2624 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0524 19:26:53.591459    2624 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:74:1b:be Flags:up|broadcast|multicast|running}
	I0524 19:26:53.594192    2624 ip.go:210] interface addr: fe80::2d9b:6c8:36de:16db/64
	I0524 19:26:53.594192    2624 ip.go:210] interface addr: 172.27.128.1/20
	I0524 19:26:53.603996    2624 ssh_runner.go:195] Run: grep 172.27.128.1	host.minikube.internal$ /etc/hosts
	I0524 19:26:53.610868    2624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
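Editor's note: the host.minikube.internal entry is refreshed idempotently: any previous mapping is filtered out of /etc/hosts and the host's Default Switch gateway address (172.27.128.1) is appended. An illustrative Go version of that rewrite, with the function name and file handling invented for the sketch:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line for the given hostname from the
	// hosts file and appends "ip<TAB>hostname", matching the grep -v / echo
	// pipeline the provisioner runs over SSH.
	func ensureHostsEntry(path, ip, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(line, "\t"+hostname) {
				continue // stale mapping, drop it
			}
			kept = append(kept, line)
		}
		// Trim trailing blank lines before appending so the file does not grow.
		for len(kept) > 0 && kept[len(kept)-1] == "" {
			kept = kept[:len(kept)-1]
		}
		kept = append(kept, ip+"\t"+hostname, "")
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "172.27.128.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}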
	I0524 19:26:53.632258    2624 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 19:26:53.639402    2624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 19:26:53.676827    2624 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
	I0524 19:26:53.676827    2624 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
	I0524 19:26:53.676827    2624 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
	I0524 19:26:53.676827    2624 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
	I0524 19:26:53.676827    2624 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0524 19:26:53.676827    2624 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0524 19:26:53.676827    2624 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0524 19:26:53.676827    2624 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 19:26:53.676827    2624 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 19:26:53.676827    2624 docker.go:563] Images already preloaded, skipping extraction
	I0524 19:26:53.683099    2624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 19:26:53.717023    2624 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
	I0524 19:26:53.717023    2624 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
	I0524 19:26:53.717023    2624 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
	I0524 19:26:53.717023    2624 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
	I0524 19:26:53.717023    2624 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0524 19:26:53.717023    2624 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0524 19:26:53.717023    2624 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0524 19:26:53.717023    2624 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 19:26:53.717023    2624 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 19:26:53.717023    2624 cache_images.go:84] Images are preloaded, skipping loading
	I0524 19:26:53.725548    2624 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 19:26:53.769326    2624 command_runner.go:130] > cgroupfs
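Editor's note: the kubelet/kubeadm configuration generated next is keyed off the container runtime's cgroup driver, read here with `docker info --format {{.CgroupDriver}}`. A small Go sketch of that probe (an illustrative wrapper, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// dockerCgroupDriver asks the docker daemon which cgroup driver it uses
	// ("cgroupfs" or "systemd"); kubeadm's KubeletConfiguration must match it.
	func dockerCgroupDriver() (string, error) {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		driver, err := dockerCgroupDriver()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		fmt.Println("cgroupDriver:", driver)
	}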
	I0524 19:26:53.769524    2624 cni.go:84] Creating CNI manager for ""
	I0524 19:26:53.769524    2624 cni.go:136] 1 nodes found, recommending kindnet
	I0524 19:26:53.769608    2624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 19:26:53.769608    2624 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.130.107 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-237000 NodeName:multinode-237000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.130.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.130.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 19:26:53.769725    2624 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.130.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-237000"
	  kubeletExtraArgs:
	    node-ip: 172.27.130.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.130.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 19:26:53.769725    2624 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-237000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.130.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 19:26:53.779470    2624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 19:26:53.797357    2624 command_runner.go:130] > kubeadm
	I0524 19:26:53.797357    2624 command_runner.go:130] > kubectl
	I0524 19:26:53.797416    2624 command_runner.go:130] > kubelet
	I0524 19:26:53.797416    2624 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 19:26:53.806812    2624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 19:26:53.822439    2624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0524 19:26:53.848592    2624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 19:26:53.880672    2624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0524 19:26:53.919932    2624 ssh_runner.go:195] Run: grep 172.27.130.107	control-plane.minikube.internal$ /etc/hosts
	I0524 19:26:53.925222    2624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.130.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 19:26:53.946584    2624 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000 for IP: 172.27.130.107
	I0524 19:26:53.946664    2624 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:26:53.947354    2624 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0524 19:26:53.947354    2624 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0524 19:26:53.948221    2624 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\client.key
	I0524 19:26:53.948221    2624 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\client.crt with IP's: []
	I0524 19:26:54.027083    2624 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\client.crt ...
	I0524 19:26:54.027083    2624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\client.crt: {Name:mk8533cee2aa3e523481607de9ae826a9a9dc898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:26:54.028754    2624 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\client.key ...
	I0524 19:26:54.028754    2624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\client.key: {Name:mk880e924233a478126d83a83fb6904618d37476 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:26:54.029538    2624 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.key.ecfba3d2
	I0524 19:26:54.029538    2624 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.crt.ecfba3d2 with IP's: [172.27.130.107 10.96.0.1 127.0.0.1 10.0.0.1]
	I0524 19:26:54.193211    2624 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.crt.ecfba3d2 ...
	I0524 19:26:54.193211    2624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.crt.ecfba3d2: {Name:mkc6c78f9bc218ee27a31f183401a2b6a92f372f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:26:54.193921    2624 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.key.ecfba3d2 ...
	I0524 19:26:54.193921    2624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.key.ecfba3d2: {Name:mk5b1cb91ca9e8823b396bd2ade8707de2bddae8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:26:54.194922    2624 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.crt.ecfba3d2 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.crt
	I0524 19:26:54.204947    2624 certs.go:341] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.key.ecfba3d2 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.key
	I0524 19:26:54.207960    2624 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.key
	I0524 19:26:54.208392    2624 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.crt with IP's: []
	I0524 19:26:54.434306    2624 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.crt ...
	I0524 19:26:54.434306    2624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.crt: {Name:mk23853b66780730ba84a7a6025d9f471386bd82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:26:54.434816    2624 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.key ...
	I0524 19:26:54.434816    2624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.key: {Name:mkc4fff32d4d7b6f6cc1ab114af72eb407946899 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:26:54.435816    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0524 19:26:54.437005    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0524 19:26:54.437145    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0524 19:26:54.445440    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0524 19:26:54.445440    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0524 19:26:54.446442    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0524 19:26:54.446442    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0524 19:26:54.446442    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0524 19:26:54.446442    2624 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem (1338 bytes)
	W0524 19:26:54.446442    2624 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560_empty.pem, impossibly tiny 0 bytes
	I0524 19:26:54.446442    2624 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0524 19:26:54.446442    2624 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0524 19:26:54.447885    2624 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0524 19:26:54.448128    2624 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0524 19:26:54.448319    2624 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem (1708 bytes)
	I0524 19:26:54.448319    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:26:54.449336    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem -> /usr/share/ca-certificates/6560.pem
	I0524 19:26:54.449336    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> /usr/share/ca-certificates/65602.pem
	I0524 19:26:54.450932    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 19:26:54.501010    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0524 19:26:54.541728    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 19:26:54.585679    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 19:26:54.627097    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 19:26:54.667076    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 19:26:54.709275    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 19:26:54.753441    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 19:26:54.796785    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 19:26:54.840215    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem --> /usr/share/ca-certificates/6560.pem (1338 bytes)
	I0524 19:26:54.883367    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /usr/share/ca-certificates/65602.pem (1708 bytes)
	I0524 19:26:54.928508    2624 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 19:26:54.972634    2624 ssh_runner.go:195] Run: openssl version
	I0524 19:26:54.980704    2624 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0524 19:26:54.990089    2624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 19:26:55.021969    2624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:26:55.028852    2624 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:26:55.029011    2624 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:26:55.039563    2624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:26:55.048265    2624 command_runner.go:130] > b5213941
	I0524 19:26:55.057332    2624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 19:26:55.085107    2624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6560.pem && ln -fs /usr/share/ca-certificates/6560.pem /etc/ssl/certs/6560.pem"
	I0524 19:26:55.113216    2624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6560.pem
	I0524 19:26:55.119994    2624 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 19:26:55.120220    2624 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 19:26:55.130131    2624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6560.pem
	I0524 19:26:55.139604    2624 command_runner.go:130] > 51391683
	I0524 19:26:55.149762    2624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6560.pem /etc/ssl/certs/51391683.0"
	I0524 19:26:55.179705    2624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65602.pem && ln -fs /usr/share/ca-certificates/65602.pem /etc/ssl/certs/65602.pem"
	I0524 19:26:55.213151    2624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65602.pem
	I0524 19:26:55.219316    2624 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 19:26:55.219979    2624 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 19:26:55.228935    2624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65602.pem
	I0524 19:26:55.237948    2624 command_runner.go:130] > 3ec20f2e
	I0524 19:26:55.248266    2624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65602.pem /etc/ssl/certs/3ec20f2e.0"
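Editor's note: each CA certificate is made visible to OpenSSL consumers by computing its subject hash and linking /etc/ssl/certs/<hash>.0 at it, exactly what the three openssl/ln pairs above do for minikubeCA.pem, 6560.pem and 65602.pem. A hedged Go sketch that shells out for the hash and creates the link; the helper name is invented, and only the openssl invocation is taken from the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the OpenSSL subject hash of certPath and creates the
	// /etc/ssl/certs/<hash>.0 symlink that tools use to locate trusted CAs.
	func linkCACert(certPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
		link := filepath.Join(certsDir, hash+".0")
		// Replace any stale link so repeated provisioning stays idempotent.
		_ = os.Remove(link)
		return link, os.Symlink(certPath, link)
	}

	func main() {
		link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("created", link)
	}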
	I0524 19:26:55.275144    2624 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 19:26:55.282096    2624 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 19:26:55.282096    2624 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 19:26:55.282096    2624 kubeadm.go:404] StartCluster: {Name:multinode-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.130.107 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 19:26:55.289795    2624 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 19:26:55.334113    2624 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 19:26:55.350591    2624 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0524 19:26:55.350652    2624 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0524 19:26:55.350652    2624 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0524 19:26:55.360421    2624 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 19:26:55.389840    2624 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 19:26:55.410632    2624 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0524 19:26:55.410942    2624 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0524 19:26:55.410942    2624 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0524 19:26:55.410942    2624 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 19:26:55.411205    2624 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 19:26:55.411317    2624 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0524 19:26:56.209658    2624 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 19:26:56.209658    2624 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 19:26:56.210859    2624 kubeadm.go:322] W0524 19:26:56.208245    1502 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 19:26:56.210933    2624 command_runner.go:130] ! W0524 19:26:56.208245    1502 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 19:26:59.802846    2624 kubeadm.go:322] W0524 19:26:59.800226    1502 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 19:26:59.802846    2624 command_runner.go:130] ! W0524 19:26:59.800226    1502 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 19:27:12.022441    2624 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0524 19:27:12.022441    2624 command_runner.go:130] > [init] Using Kubernetes version: v1.27.2
	I0524 19:27:12.022441    2624 kubeadm.go:322] [preflight] Running pre-flight checks
	I0524 19:27:12.022441    2624 command_runner.go:130] > [preflight] Running pre-flight checks
	I0524 19:27:12.022441    2624 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0524 19:27:12.022441    2624 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0524 19:27:12.022973    2624 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0524 19:27:12.022973    2624 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0524 19:27:12.023173    2624 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0524 19:27:12.023173    2624 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0524 19:27:12.023229    2624 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0524 19:27:12.026436    2624 out.go:204]   - Generating certificates and keys ...
	I0524 19:27:12.023229    2624 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0524 19:27:12.027039    2624 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0524 19:27:12.026972    2624 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0524 19:27:12.027258    2624 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0524 19:27:12.027258    2624 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0524 19:27:12.027492    2624 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0524 19:27:12.027492    2624 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0524 19:27:12.027756    2624 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0524 19:27:12.027756    2624 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0524 19:27:12.027874    2624 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0524 19:27:12.027874    2624 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0524 19:27:12.027874    2624 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0524 19:27:12.027874    2624 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0524 19:27:12.027874    2624 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0524 19:27:12.027874    2624 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0524 19:27:12.028498    2624 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-237000] and IPs [172.27.130.107 127.0.0.1 ::1]
	I0524 19:27:12.028588    2624 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-237000] and IPs [172.27.130.107 127.0.0.1 ::1]
	I0524 19:27:12.028729    2624 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0524 19:27:12.028729    2624 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0524 19:27:12.029038    2624 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-237000] and IPs [172.27.130.107 127.0.0.1 ::1]
	I0524 19:27:12.029038    2624 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-237000] and IPs [172.27.130.107 127.0.0.1 ::1]
	I0524 19:27:12.029350    2624 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0524 19:27:12.029350    2624 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0524 19:27:12.029471    2624 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0524 19:27:12.029471    2624 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0524 19:27:12.029668    2624 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0524 19:27:12.029726    2624 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0524 19:27:12.029897    2624 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0524 19:27:12.029897    2624 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0524 19:27:12.030056    2624 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0524 19:27:12.030056    2624 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0524 19:27:12.030215    2624 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0524 19:27:12.030215    2624 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0524 19:27:12.030462    2624 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0524 19:27:12.030462    2624 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0524 19:27:12.030725    2624 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0524 19:27:12.030725    2624 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0524 19:27:12.030725    2624 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 19:27:12.030725    2624 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 19:27:12.030725    2624 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 19:27:12.030725    2624 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 19:27:12.031272    2624 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0524 19:27:12.031329    2624 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0524 19:27:12.031329    2624 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0524 19:27:12.031329    2624 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0524 19:27:12.034920    2624 out.go:204]   - Booting up control plane ...
	I0524 19:27:12.034920    2624 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0524 19:27:12.034920    2624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0524 19:27:12.034920    2624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0524 19:27:12.035479    2624 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0524 19:27:12.035704    2624 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0524 19:27:12.035704    2624 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0524 19:27:12.035704    2624 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0524 19:27:12.035704    2624 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0524 19:27:12.036286    2624 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0524 19:27:12.036286    2624 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0524 19:27:12.036409    2624 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.507001 seconds
	I0524 19:27:12.036409    2624 command_runner.go:130] > [apiclient] All control plane components are healthy after 10.507001 seconds
	I0524 19:27:12.036409    2624 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0524 19:27:12.036409    2624 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0524 19:27:12.037068    2624 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0524 19:27:12.037108    2624 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0524 19:27:12.037134    2624 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0524 19:27:12.037134    2624 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0524 19:27:12.037134    2624 command_runner.go:130] > [mark-control-plane] Marking the node multinode-237000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0524 19:27:12.037134    2624 kubeadm.go:322] [mark-control-plane] Marking the node multinode-237000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0524 19:27:12.037688    2624 command_runner.go:130] > [bootstrap-token] Using token: u4hytl.nvtfiyhgi6lj1i5c
	I0524 19:27:12.037688    2624 kubeadm.go:322] [bootstrap-token] Using token: u4hytl.nvtfiyhgi6lj1i5c
	I0524 19:27:12.040276    2624 out.go:204]   - Configuring RBAC rules ...
	I0524 19:27:12.040276    2624 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0524 19:27:12.040276    2624 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0524 19:27:12.040276    2624 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0524 19:27:12.040276    2624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0524 19:27:12.041332    2624 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0524 19:27:12.041400    2624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0524 19:27:12.041630    2624 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0524 19:27:12.041679    2624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0524 19:27:12.041931    2624 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0524 19:27:12.041931    2624 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0524 19:27:12.042113    2624 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0524 19:27:12.042113    2624 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0524 19:27:12.042507    2624 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0524 19:27:12.042507    2624 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0524 19:27:12.042659    2624 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0524 19:27:12.042659    2624 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0524 19:27:12.042659    2624 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0524 19:27:12.042659    2624 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0524 19:27:12.042659    2624 kubeadm.go:322] 
	I0524 19:27:12.042659    2624 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0524 19:27:12.042659    2624 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0524 19:27:12.042659    2624 kubeadm.go:322] 
	I0524 19:27:12.043216    2624 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0524 19:27:12.043216    2624 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0524 19:27:12.043216    2624 kubeadm.go:322] 
	I0524 19:27:12.043216    2624 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0524 19:27:12.043216    2624 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0524 19:27:12.043216    2624 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0524 19:27:12.043216    2624 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0524 19:27:12.043216    2624 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0524 19:27:12.043216    2624 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0524 19:27:12.043216    2624 kubeadm.go:322] 
	I0524 19:27:12.043216    2624 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0524 19:27:12.043216    2624 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0524 19:27:12.043769    2624 kubeadm.go:322] 
	I0524 19:27:12.043998    2624 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0524 19:27:12.043998    2624 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0524 19:27:12.044037    2624 kubeadm.go:322] 
	I0524 19:27:12.044098    2624 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0524 19:27:12.044138    2624 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0524 19:27:12.044138    2624 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0524 19:27:12.044339    2624 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0524 19:27:12.044481    2624 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0524 19:27:12.044481    2624 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0524 19:27:12.044519    2624 kubeadm.go:322] 
	I0524 19:27:12.044795    2624 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0524 19:27:12.044826    2624 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0524 19:27:12.044848    2624 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0524 19:27:12.044848    2624 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0524 19:27:12.044848    2624 kubeadm.go:322] 
	I0524 19:27:12.044848    2624 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token u4hytl.nvtfiyhgi6lj1i5c \
	I0524 19:27:12.044848    2624 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token u4hytl.nvtfiyhgi6lj1i5c \
	I0524 19:27:12.045500    2624 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 \
	I0524 19:27:12.045500    2624 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 \
	I0524 19:27:12.045571    2624 kubeadm.go:322] 	--control-plane 
	I0524 19:27:12.045596    2624 command_runner.go:130] > 	--control-plane 
	I0524 19:27:12.045596    2624 kubeadm.go:322] 
	I0524 19:27:12.045596    2624 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0524 19:27:12.045596    2624 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0524 19:27:12.045596    2624 kubeadm.go:322] 
	I0524 19:27:12.045596    2624 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token u4hytl.nvtfiyhgi6lj1i5c \
	I0524 19:27:12.045596    2624 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token u4hytl.nvtfiyhgi6lj1i5c \
	I0524 19:27:12.046285    2624 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 
	I0524 19:27:12.046285    2624 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 
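The --discovery-token-ca-cert-hash value printed by kubeadm above is a SHA-256 over the cluster CA's DER-encoded Subject Public Key Info; joining nodes use it to pin the CA they fetch during token-based discovery. A short Go sketch that recomputes such a hash from a CA certificate on disk (the ca.crt path is an assumption, not read from this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path to the cluster CA certificate.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA's public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}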
	I0524 19:27:12.046395    2624 cni.go:84] Creating CNI manager for ""
	I0524 19:27:12.046424    2624 cni.go:136] 1 nodes found, recommending kindnet
	I0524 19:27:12.051295    2624 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0524 19:27:12.063097    2624 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0524 19:27:12.075349    2624 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0524 19:27:12.075404    2624 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0524 19:27:12.075404    2624 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0524 19:27:12.075404    2624 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0524 19:27:12.075474    2624 command_runner.go:130] > Access: 2023-05-24 19:26:05.600064400 +0000
	I0524 19:27:12.075474    2624 command_runner.go:130] > Modify: 2023-05-20 04:10:39.000000000 +0000
	I0524 19:27:12.075474    2624 command_runner.go:130] > Change: 2023-05-24 19:25:55.818000000 +0000
	I0524 19:27:12.075500    2624 command_runner.go:130] >  Birth: -
	I0524 19:27:12.075500    2624 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0524 19:27:12.075500    2624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0524 19:27:12.125687    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0524 19:27:13.865294    2624 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0524 19:27:13.865294    2624 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0524 19:27:13.865294    2624 command_runner.go:130] > serviceaccount/kindnet created
	I0524 19:27:13.865294    2624 command_runner.go:130] > daemonset.apps/kindnet created
	I0524 19:27:13.865294    2624 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.7396076s)
	I0524 19:27:13.865294    2624 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 19:27:13.877231    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:13.877231    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e minikube.k8s.io/name=multinode-237000 minikube.k8s.io/updated_at=2023_05_24T19_27_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:13.883508    2624 command_runner.go:130] > -16
	I0524 19:27:13.883600    2624 ops.go:34] apiserver oom_adj: -16
	I0524 19:27:14.052374    2624 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0524 19:27:14.055237    2624 command_runner.go:130] > node/multinode-237000 labeled
	I0524 19:27:14.066649    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:14.203877    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:14.727684    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:14.857351    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:15.229111    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:15.357004    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:15.732862    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:15.871119    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:16.233373    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:16.369869    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:16.719977    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:16.865143    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:17.222407    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:17.375401    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:17.729167    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:17.855145    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:18.220031    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:18.386978    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:18.729880    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:18.852115    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:19.230285    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:19.352495    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:19.717729    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:19.848606    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:20.220467    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:20.366846    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:20.722153    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:20.877324    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:21.224424    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:21.369094    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:21.732734    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:21.871889    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:22.220314    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:22.399168    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:22.727876    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:22.890370    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:23.229118    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:23.379333    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:23.719487    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:23.968574    2624 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0524 19:27:24.225768    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 19:27:24.479238    2624 command_runner.go:130] > NAME      SECRETS   AGE
	I0524 19:27:24.479301    2624 command_runner.go:130] > default   0         0s
	I0524 19:27:24.479357    2624 kubeadm.go:1076] duration metric: took 10.6140682s to wait for elevateKubeSystemPrivileges.
	I0524 19:27:24.479357    2624 kubeadm.go:406] StartCluster complete in 29.1972738s
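The burst of `kubectl get sa default` calls ending above is a plain poll: kubeadm has returned, but the cluster is only treated as usable once the controller-manager has created the "default" ServiceAccount, at which point the NotFound errors turn into the `default 0 0s` row after roughly ten seconds. The same wait expressed with client-go, as a sketch only; the kubeconfig path and timeout below are assumptions, and the test itself shells out to kubectl over SSH instead:

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll roughly every 500ms, as the log does, until the "default" ServiceAccount exists.
	err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // not created yet, keep waiting
		}
		return err == nil, err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("default service account is ready")
}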
	I0524 19:27:24.479443    2624 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:27:24.479666    2624 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:27:24.480934    2624 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:27:24.481895    2624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 19:27:24.482606    2624 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0524 19:27:24.482790    2624 addons.go:66] Setting storage-provisioner=true in profile "multinode-237000"
	I0524 19:27:24.482790    2624 addons.go:228] Setting addon storage-provisioner=true in "multinode-237000"
	I0524 19:27:24.482790    2624 addons.go:66] Setting default-storageclass=true in profile "multinode-237000"
	I0524 19:27:24.482790    2624 host.go:66] Checking if "multinode-237000" exists ...
	I0524 19:27:24.482790    2624 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-237000"
	I0524 19:27:24.482790    2624 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:27:24.484059    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:27:24.484421    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:27:24.495058    2624 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:27:24.495910    2624 kapi.go:59] client config for multinode-237000: &rest.Config{Host:"https://172.27.130.107:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:27:24.498462    2624 cert_rotation.go:137] Starting client certificate rotation controller
	I0524 19:27:24.499031    2624 round_trippers.go:463] GET https://172.27.130.107:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0524 19:27:24.499102    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:24.499102    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:24.499184    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:24.531600    2624 round_trippers.go:574] Response Status: 200 OK in 31 milliseconds
	I0524 19:27:24.531671    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:24.531671    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:24.531671    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:24.531671    2624 round_trippers.go:580]     Content-Length: 291
	I0524 19:27:24.531671    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:24 GMT
	I0524 19:27:24.531671    2624 round_trippers.go:580]     Audit-Id: 9cb13847-25a4-47b7-82f3-e40f4f909c92
	I0524 19:27:24.531671    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:24.531671    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:24.531671    2624 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9016559d-3c59-4f76-8961-1b5665cb8836","resourceVersion":"316","creationTimestamp":"2023-05-24T19:27:11Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0524 19:27:24.532461    2624 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9016559d-3c59-4f76-8961-1b5665cb8836","resourceVersion":"316","creationTimestamp":"2023-05-24T19:27:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0524 19:27:24.533012    2624 round_trippers.go:463] PUT https://172.27.130.107:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0524 19:27:24.533012    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:24.533012    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:24.533080    2624 round_trippers.go:473]     Content-Type: application/json
	I0524 19:27:24.533080    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:24.610119    2624 round_trippers.go:574] Response Status: 200 OK in 77 milliseconds
	I0524 19:27:24.610119    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:24.610689    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:24.610689    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:24.610689    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:24.610689    2624 round_trippers.go:580]     Content-Length: 291
	I0524 19:27:24.610689    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:24 GMT
	I0524 19:27:24.610775    2624 round_trippers.go:580]     Audit-Id: 05c3c85f-90e4-4ce9-8ca1-c79d034ae95c
	I0524 19:27:24.610818    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:24.613988    2624 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9016559d-3c59-4f76-8961-1b5665cb8836","resourceVersion":"334","creationTimestamp":"2023-05-24T19:27:11Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0524 19:27:24.903550    2624 command_runner.go:130] > apiVersion: v1
	I0524 19:27:24.903622    2624 command_runner.go:130] > data:
	I0524 19:27:24.903622    2624 command_runner.go:130] >   Corefile: |
	I0524 19:27:24.903622    2624 command_runner.go:130] >     .:53 {
	I0524 19:27:24.903622    2624 command_runner.go:130] >         errors
	I0524 19:27:24.903622    2624 command_runner.go:130] >         health {
	I0524 19:27:24.903622    2624 command_runner.go:130] >            lameduck 5s
	I0524 19:27:24.903622    2624 command_runner.go:130] >         }
	I0524 19:27:24.903622    2624 command_runner.go:130] >         ready
	I0524 19:27:24.903686    2624 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0524 19:27:24.903686    2624 command_runner.go:130] >            pods insecure
	I0524 19:27:24.903686    2624 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0524 19:27:24.903686    2624 command_runner.go:130] >            ttl 30
	I0524 19:27:24.903686    2624 command_runner.go:130] >         }
	I0524 19:27:24.903686    2624 command_runner.go:130] >         prometheus :9153
	I0524 19:27:24.903686    2624 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0524 19:27:24.903777    2624 command_runner.go:130] >            max_concurrent 1000
	I0524 19:27:24.903777    2624 command_runner.go:130] >         }
	I0524 19:27:24.903808    2624 command_runner.go:130] >         cache 30
	I0524 19:27:24.903808    2624 command_runner.go:130] >         loop
	I0524 19:27:24.903808    2624 command_runner.go:130] >         reload
	I0524 19:27:24.903853    2624 command_runner.go:130] >         loadbalance
	I0524 19:27:24.903853    2624 command_runner.go:130] >     }
	I0524 19:27:24.903853    2624 command_runner.go:130] > kind: ConfigMap
	I0524 19:27:24.903853    2624 command_runner.go:130] > metadata:
	I0524 19:27:24.903853    2624 command_runner.go:130] >   creationTimestamp: "2023-05-24T19:27:11Z"
	I0524 19:27:24.903853    2624 command_runner.go:130] >   name: coredns
	I0524 19:27:24.903853    2624 command_runner.go:130] >   namespace: kube-system
	I0524 19:27:24.903925    2624 command_runner.go:130] >   resourceVersion: "232"
	I0524 19:27:24.903925    2624 command_runner.go:130] >   uid: 51dbda2b-4334-4537-869d-860680c0ab81
	I0524 19:27:24.904170    2624 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0524 19:27:25.114329    2624 round_trippers.go:463] GET https://172.27.130.107:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0524 19:27:25.114329    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:25.114329    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:25.114329    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:25.118419    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:27:25.119130    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:25.119130    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:25 GMT
	I0524 19:27:25.119130    2624 round_trippers.go:580]     Audit-Id: a6329a82-7d90-4e7b-ae34-9b02d58f1077
	I0524 19:27:25.119130    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:25.119130    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:25.119130    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:25.119215    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:25.119215    2624 round_trippers.go:580]     Content-Length: 291
	I0524 19:27:25.119215    2624 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9016559d-3c59-4f76-8961-1b5665cb8836","resourceVersion":"367","creationTimestamp":"2023-05-24T19:27:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0524 19:27:25.119420    2624 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-237000" context rescaled to 1 replicas
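The GET/PUT pair against .../deployments/coredns/scale a few lines up is what this rescale message refers to: the Deployment's autoscaling/v1 Scale subresource is read and written back with spec.replicas lowered from the 2 replicas kubeadm deploys to 1, which is enough on a single control-plane node. An equivalent client-go sketch (the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	// Read the current scale, then write it back with one replica:
	// the same GET + PUT on the coredns "scale" subresource seen in the log.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}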
	I0524 19:27:25.119484    2624 start.go:223] Will wait 6m0s for node &{Name: IP:172.27.130.107 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 19:27:25.124324    2624 out.go:177] * Verifying Kubernetes components...
	I0524 19:27:25.138747    2624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 19:27:25.317513    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:27:25.317513    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:27:25.317513    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:25.317513    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:25.320176    2624 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 19:27:25.318788    2624 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:27:25.323011    2624 kapi.go:59] client config for multinode-237000: &rest.Config{Host:"https://172.27.130.107:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:27:25.324001    2624 round_trippers.go:463] GET https://172.27.130.107:8443/apis/storage.k8s.io/v1/storageclasses
	I0524 19:27:25.324001    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:25.324001    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:25.324001    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:25.324458    2624 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 19:27:25.324458    2624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0524 19:27:25.324458    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:27:25.334704    2624 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0524 19:27:25.334704    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:25.334704    2624 round_trippers.go:580]     Audit-Id: 183e8448-b8d6-4e35-8ec0-c3215f4acade
	I0524 19:27:25.334704    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:25.334704    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:25.334704    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:25.334704    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:25.335390    2624 round_trippers.go:580]     Content-Length: 109
	I0524 19:27:25.335390    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:25 GMT
	I0524 19:27:25.335390    2624 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"367"},"items":[]}
	I0524 19:27:25.335390    2624 addons.go:228] Setting addon default-storageclass=true in "multinode-237000"
	I0524 19:27:25.335390    2624 host.go:66] Checking if "multinode-237000" exists ...
	I0524 19:27:25.336664    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:27:25.822688    2624 command_runner.go:130] > configmap/coredns replaced
	I0524 19:27:25.822803    2624 start.go:916] {"host.minikube.internal": 172.27.128.1} host record injected into CoreDNS's ConfigMap
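The `configmap/coredns replaced` line is the result of the sed pipeline started at 19:27:24.904170: it rewrites the Corefile dumped above, inserting a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the host-side address minikube detected (172.27.128.1), plus a `log` directive ahead of `errors`. Reconstructed from that command, the inserted hosts block looks like:

        hosts {
           172.27.128.1 host.minikube.internal
           fallthrough
        }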
	I0524 19:27:25.824226    2624 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:27:25.824927    2624 kapi.go:59] client config for multinode-237000: &rest.Config{Host:"https://172.27.130.107:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:27:25.827964    2624 node_ready.go:35] waiting up to 6m0s for node "multinode-237000" to be "Ready" ...
	I0524 19:27:25.828231    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:25.828258    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:25.828302    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:25.828302    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:25.831436    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:25.832142    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:25.832142    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:25.832205    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:25.832205    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:25.832205    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:25.832205    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:25 GMT
	I0524 19:27:25.832205    2624 round_trippers.go:580]     Audit-Id: 0cf5a5af-1bc7-4e07-989a-e836a594e3d0
	I0524 19:27:25.832205    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
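From here node_ready.go keeps issuing the GET /api/v1/nodes/multinode-237000 requests shown above and below, waiting for the node's Ready condition to report True. A client-go sketch of that single check, intended to be wired into a polling loop like the one in the earlier ServiceAccount example; the clientset construction is assumed to be the same as there:

package nodecheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// NodeIsReady reports whether the named node has a Ready condition with status True,
// which is what the repeated node GETs in this log are waiting for.
func NodeIsReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}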
	I0524 19:27:26.126017    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:27:26.126017    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:26.126154    2624 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0524 19:27:26.126305    2624 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0524 19:27:26.126399    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:27:26.134482    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:27:26.134482    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:26.134482    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:27:26.346123    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:26.346193    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:26.346193    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:26.346252    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:26.356174    2624 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0524 19:27:26.356174    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:26.356174    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:26 GMT
	I0524 19:27:26.356174    2624 round_trippers.go:580]     Audit-Id: 5053fb16-063f-48f8-bdc7-8df9614d2ce8
	I0524 19:27:26.356174    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:26.356174    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:26.356174    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:26.356174    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:26.356174    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:26.848026    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:26.848071    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:26.848110    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:26.848110    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:26.855386    2624 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0524 19:27:26.855386    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:26.855386    2624 round_trippers.go:580]     Audit-Id: 11bdb43b-b4da-4634-8790-aba1e38a4f77
	I0524 19:27:26.855386    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:26.855386    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:26.855386    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:26.855386    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:26.855386    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:26 GMT
	I0524 19:27:26.855907    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:26.927059    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:27:26.927271    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:26.927271    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:27:27.247839    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:27:27.248021    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:27.248256    2624 sshutil.go:53] new ssh client: &{IP:172.27.130.107 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:27:27.341697    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:27.341697    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:27.341779    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:27.341779    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:27.344694    2624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:27:27.344694    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:27.345622    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:27 GMT
	I0524 19:27:27.345622    2624 round_trippers.go:580]     Audit-Id: 13fa6ea8-ca27-4f00-8b27-b7f00c24cdb6
	I0524 19:27:27.345688    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:27.345688    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:27.345688    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:27.345775    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:27.346149    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:27.403363    2624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0524 19:27:27.847446    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:27.847446    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:27.847446    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:27.847446    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:27.850061    2624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:27:27.851010    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:27.851071    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:27.851071    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:27.851071    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:27.851071    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:27 GMT
	I0524 19:27:27.851071    2624 round_trippers.go:580]     Audit-Id: b97fcaa9-6f08-4526-91f1-258efa709002
	I0524 19:27:27.851071    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:27.851071    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:27.851659    2624 node_ready.go:58] node "multinode-237000" has status "Ready":"False"
	I0524 19:27:28.035474    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:27:28.035527    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:28.035949    2624 sshutil.go:53] new ssh client: &{IP:172.27.130.107 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:27:28.195703    2624 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0524 19:27:28.195703    2624 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0524 19:27:28.195703    2624 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0524 19:27:28.195703    2624 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0524 19:27:28.195703    2624 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0524 19:27:28.195703    2624 command_runner.go:130] > pod/storage-provisioner created
	I0524 19:27:28.220816    2624 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0524 19:27:28.348102    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:28.348166    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:28.348166    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:28.348166    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:28.356178    2624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:27:28.359457    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:28.359504    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:28 GMT
	I0524 19:27:28.359632    2624 round_trippers.go:580]     Audit-Id: 17ae1f4d-b348-41b6-8947-ae127a138f37
	I0524 19:27:28.359632    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:28.359687    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:28.359733    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:28.359935    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:28.359986    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:28.637006    2624 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0524 19:27:28.640221    2624 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0524 19:27:28.643687    2624 addons.go:499] enable addons completed in 4.1610833s: enabled=[storage-provisioner default-storageclass]
	I0524 19:27:28.840907    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:28.840907    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:28.840907    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:28.840907    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:28.846169    2624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:27:28.846169    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:28.846169    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:28.846483    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:28.846483    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:28.846483    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:28 GMT
	I0524 19:27:28.846483    2624 round_trippers.go:580]     Audit-Id: ac1d62b8-ded9-49cd-9a0c-c777e034deb4
	I0524 19:27:28.846532    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:28.846716    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:29.346734    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:29.346734    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:29.346734    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:29.346734    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:29.357587    2624 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0524 19:27:29.357587    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:29.357587    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:29.357587    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:29.357587    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:29 GMT
	I0524 19:27:29.357587    2624 round_trippers.go:580]     Audit-Id: 81da58d9-deb4-4193-9c90-3e5236f5fc67
	I0524 19:27:29.357587    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:29.357587    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:29.357587    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:29.847114    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:29.847114    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:29.847114    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:29.847114    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:29.851927    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:27:29.851927    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:29.851927    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:29.851927    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:29.852227    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:29 GMT
	I0524 19:27:29.852227    2624 round_trippers.go:580]     Audit-Id: 7f5129b9-b0a7-458c-b5a6-9e7cbfbf6206
	I0524 19:27:29.852270    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:29.852270    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:29.852270    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:29.853204    2624 node_ready.go:58] node "multinode-237000" has status "Ready":"False"
	I0524 19:27:30.347710    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:30.347780    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:30.347780    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:30.347780    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:30.351044    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:30.351044    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:30.351044    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:30 GMT
	I0524 19:27:30.351318    2624 round_trippers.go:580]     Audit-Id: 8f9bfa40-add7-4f13-8e9f-965703644a32
	I0524 19:27:30.351318    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:30.351318    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:30.351318    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:30.351318    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:30.351523    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:30.834874    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:30.834874    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:30.834874    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:30.834874    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:30.838465    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:30.838465    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:30.838465    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:30.838465    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:30.838465    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:30.838465    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:30 GMT
	I0524 19:27:30.838465    2624 round_trippers.go:580]     Audit-Id: ee246b69-6382-43b3-b3e2-ab87ac589691
	I0524 19:27:30.838465    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:30.838465    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:31.335298    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:31.335298    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:31.335298    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:31.335298    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:31.348516    2624 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0524 19:27:31.348516    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:31.348516    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:31.348516    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:31.348516    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:31.348516    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:31 GMT
	I0524 19:27:31.348516    2624 round_trippers.go:580]     Audit-Id: 68852b52-537c-472a-85ca-58fdf25f85fd
	I0524 19:27:31.348516    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:31.348516    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:31.844352    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:31.844352    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:31.844352    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:31.844352    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:31.850376    2624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0524 19:27:31.850376    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:31.850376    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:31.850376    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:31.850376    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:31.850376    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:31 GMT
	I0524 19:27:31.850376    2624 round_trippers.go:580]     Audit-Id: 3156200b-af95-4406-aa71-ebeb0e34bf43
	I0524 19:27:31.850376    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:31.855351    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:31.856354    2624 node_ready.go:58] node "multinode-237000" has status "Ready":"False"
	I0524 19:27:32.349114    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:32.349114    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:32.349114    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:32.349114    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:32.367494    2624 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0524 19:27:32.367494    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:32.367494    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:32 GMT
	I0524 19:27:32.367494    2624 round_trippers.go:580]     Audit-Id: 061594b8-3ee3-4db9-a6fb-691f61e3fb1e
	I0524 19:27:32.367494    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:32.367494    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:32.367494    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:32.367494    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:32.367494    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:32.842929    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:32.842929    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:32.842929    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:32.842929    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:32.849031    2624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0524 19:27:32.849031    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:32.849031    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:32.849031    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:32 GMT
	I0524 19:27:32.849031    2624 round_trippers.go:580]     Audit-Id: 4f0f3605-8de5-4af6-87fd-1425f346b135
	I0524 19:27:32.849031    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:32.849031    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:32.849031    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:32.849031    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:33.336060    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:33.336060    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:33.336060    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:33.336060    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:33.341022    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:27:33.341022    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:33.341022    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:33.341022    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:33.341022    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:33.341022    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:33 GMT
	I0524 19:27:33.341022    2624 round_trippers.go:580]     Audit-Id: 3e3ac5b1-69ca-44f3-8143-ad8b292c74e9
	I0524 19:27:33.341022    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:33.341022    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:33.837550    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:33.837617    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:33.837617    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:33.837617    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:33.846126    2624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:27:33.846126    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:33.846126    2624 round_trippers.go:580]     Audit-Id: d953d317-3135-4517-84cf-9dddd7f4b267
	I0524 19:27:33.846126    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:33.846126    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:33.846126    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:33.846126    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:33.846126    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:33 GMT
	I0524 19:27:33.847090    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:34.338481    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:34.338910    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:34.338910    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:34.338910    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:34.342377    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:34.342377    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:34.343038    2624 round_trippers.go:580]     Audit-Id: 00512ece-ab99-49f3-95fb-83ae21f8202b
	I0524 19:27:34.343038    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:34.343038    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:34.343038    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:34.343216    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:34.343216    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:34 GMT
	I0524 19:27:34.343436    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:34.343653    2624 node_ready.go:58] node "multinode-237000" has status "Ready":"False"
	I0524 19:27:34.840849    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:34.840912    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:34.840912    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:34.840912    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:34.845762    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:27:34.845762    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:34.845762    2624 round_trippers.go:580]     Audit-Id: 90fb34a2-1016-4ac6-be3d-560764427f98
	I0524 19:27:34.845762    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:34.846290    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:34.846290    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:34.846330    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:34.846330    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:34 GMT
	I0524 19:27:34.846330    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:35.346912    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:35.346993    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:35.346993    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:35.346993    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:35.350829    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:35.350891    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:35.350891    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:35.350891    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:35.350891    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:35.350891    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:35.350891    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:35 GMT
	I0524 19:27:35.350891    2624 round_trippers.go:580]     Audit-Id: 92623cb5-f0dc-409a-98ae-d6229495922e
	I0524 19:27:35.350891    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:35.845928    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:35.846000    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:35.846000    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:35.846000    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:35.849347    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:35.849347    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:35.850060    2624 round_trippers.go:580]     Audit-Id: 55ead3ef-66e3-49ac-8ef4-9d40edbecb6c
	I0524 19:27:35.850060    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:35.850060    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:35.850060    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:35.850060    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:35.850123    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:35 GMT
	I0524 19:27:35.850348    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:36.340234    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:36.340234    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:36.340234    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:36.340234    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:36.343878    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:36.344424    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:36.344424    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:36.344424    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:36.344547    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:36.344580    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:36 GMT
	I0524 19:27:36.344580    2624 round_trippers.go:580]     Audit-Id: 5d892932-d031-4d06-a84e-1c61cc0371d7
	I0524 19:27:36.344580    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:36.344833    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:36.345026    2624 node_ready.go:58] node "multinode-237000" has status "Ready":"False"
	I0524 19:27:36.840480    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:36.840553    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:36.840622    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:36.840622    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:36.845008    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:27:36.845008    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:36.845735    2624 round_trippers.go:580]     Audit-Id: 11250f38-7aa0-4ded-b208-de1e4c70fd17
	I0524 19:27:36.845735    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:36.845735    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:36.845735    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:36.845735    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:36.845735    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:36 GMT
	I0524 19:27:36.846044    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:37.342747    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:37.342747    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:37.342747    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:37.342747    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:37.346218    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:37.346218    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:37.346218    2624 round_trippers.go:580]     Audit-Id: 6b276e7f-b9ef-4f1b-9d33-30220c49e6be
	I0524 19:27:37.346218    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:37.346218    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:37.346218    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:37.346218    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:37.346218    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:37 GMT
	I0524 19:27:37.346773    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:37.842469    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:37.842469    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:37.842469    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:37.842469    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:37.845870    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:37.846952    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:37.846952    2624 round_trippers.go:580]     Audit-Id: 956dc497-6845-48cb-bc52-8f916cf7523b
	I0524 19:27:37.846952    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:37.846952    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:37.847038    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:37.847038    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:37.847038    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:37 GMT
	I0524 19:27:37.847223    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:38.343009    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:38.343009    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:38.343009    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:38.343149    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:38.345641    2624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:27:38.345641    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:38.345641    2624 round_trippers.go:580]     Audit-Id: 4a78444b-055e-43b8-9729-5d19393ab392
	I0524 19:27:38.345641    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:38.345641    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:38.345641    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:38.345641    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:38.345641    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:38 GMT
	I0524 19:27:38.349185    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"343","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4930 chars]
	I0524 19:27:38.349615    2624 node_ready.go:58] node "multinode-237000" has status "Ready":"False"
	I0524 19:27:38.844758    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:38.844758    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:38.844758    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:38.844758    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:38.849636    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:27:38.850148    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:38.850148    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:38.850148    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:38.850148    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:38.850262    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:38 GMT
	I0524 19:27:38.850262    2624 round_trippers.go:580]     Audit-Id: 71dbdbfc-0363-4947-9dcb-787ba537c6fd
	I0524 19:27:38.850262    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:38.850361    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"399","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4785 chars]
	I0524 19:27:38.851313    2624 node_ready.go:49] node "multinode-237000" has status "Ready":"True"
	I0524 19:27:38.851313    2624 node_ready.go:38] duration metric: took 13.0232475s waiting for node "multinode-237000" to be "Ready" ...
	I0524 19:27:38.851313    2624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
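	(Reference note, not output from this run: the node- and pod-readiness polling traced above by node_ready.go/pod_ready.go can be approximated by hand against the same profile with kubectl wait. The commands below are an illustrative sketch that assumes the profile and node name shown in the log; timings and flags are not taken from the test itself.)
	out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- wait --for=condition=Ready node/multinode-237000 --timeout=360s
	out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- get pods -n kube-system -l k8s-app=kube-dns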
	I0524 19:27:38.851427    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods
	I0524 19:27:38.851509    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:38.851509    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:38.851568    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:38.857550    2624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:27:38.857550    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:38.857550    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:38.857550    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:38.857621    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:38.857621    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:38.857655    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:38 GMT
	I0524 19:27:38.857686    2624 round_trippers.go:580]     Audit-Id: 7ec737ea-2dcb-4228-b156-26f83317e301
	I0524 19:27:38.859734    2624 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"406"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"405","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54011 chars]
	I0524 19:27:38.864479    2624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace to be "Ready" ...
	I0524 19:27:38.865012    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:27:38.865113    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:38.865113    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:38.865153    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:38.868844    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:38.869337    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:38.869337    2624 round_trippers.go:580]     Audit-Id: a3751ff9-15e0-480d-8078-defc1d5d6fb8
	I0524 19:27:38.869337    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:38.869337    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:38.869337    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:38.869426    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:38.869426    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:38 GMT
	I0524 19:27:38.869566    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"405","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0524 19:27:38.869622    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:38.869622    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:38.869622    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:38.869622    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:38.880241    2624 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0524 19:27:38.880375    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:38.880375    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:38.880375    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:38.880375    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:38.880375    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:38.880375    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:38 GMT
	I0524 19:27:38.880375    2624 round_trippers.go:580]     Audit-Id: 6f1d1bd5-d7a5-45fd-bf4d-5c8b3d3fc986
	I0524 19:27:38.880732    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"399","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4785 chars]
	I0524 19:27:39.385240    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:27:39.385298    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:39.385359    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:39.385359    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:39.389818    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:27:39.389818    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:39.390044    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:39.390044    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:39.390044    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:39.390080    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:39 GMT
	I0524 19:27:39.390080    2624 round_trippers.go:580]     Audit-Id: 4e9e2438-0773-415f-8bf3-39ec2738d7b6
	I0524 19:27:39.390114    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:39.390313    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"405","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0524 19:27:39.390962    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:39.391015    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:39.391015    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:39.391015    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:39.394341    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:39.394341    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:39.394341    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:39.394341    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:39.394341    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:39 GMT
	I0524 19:27:39.394341    2624 round_trippers.go:580]     Audit-Id: a9c5623c-da42-48b5-93ab-e76daa608a1c
	I0524 19:27:39.394341    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:39.394341    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:39.394341    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"399","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4785 chars]
	I0524 19:27:39.890769    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:27:39.890769    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:39.890831    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:39.890831    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:39.895226    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:27:39.895226    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:39.895893    2624 round_trippers.go:580]     Audit-Id: 381378e0-cb19-489e-8468-709fe491f2b3
	I0524 19:27:39.895893    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:39.895893    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:39.895893    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:39.895893    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:39.895998    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:39 GMT
	I0524 19:27:39.896332    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"405","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0524 19:27:39.897072    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:39.897072    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:39.897072    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:39.897132    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:39.899298    2624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:27:39.899992    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:39.899992    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:39 GMT
	I0524 19:27:39.899992    2624 round_trippers.go:580]     Audit-Id: 41eadbb9-575b-4a50-a0d8-1dcf0e4ddadf
	I0524 19:27:39.899992    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:39.899992    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:39.899992    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:39.900060    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:39.900354    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"399","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4785 chars]
	I0524 19:27:40.389080    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:27:40.389266    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:40.389266    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:40.389266    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:40.392710    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:40.392929    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:40.392929    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:40.392929    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:40.392929    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:40.392929    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:40.392929    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:40 GMT
	I0524 19:27:40.392929    2624 round_trippers.go:580]     Audit-Id: 7c3753cb-1e04-4e65-ac3d-8b0955b42794
	I0524 19:27:40.393161    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"405","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0524 19:27:40.393812    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:40.393812    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:40.393812    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:40.393812    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:40.396775    2624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:27:40.396775    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:40.396775    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:40.396775    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:40 GMT
	I0524 19:27:40.396775    2624 round_trippers.go:580]     Audit-Id: 7ad3af59-ed6b-4176-9ed3-43b556e79084
	I0524 19:27:40.396775    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:40.396775    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:40.396775    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:40.396775    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"399","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4785 chars]
	I0524 19:27:40.896049    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:27:40.896104    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:40.896104    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:40.896104    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:40.904477    2624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:27:40.904477    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:40.904477    2624 round_trippers.go:580]     Audit-Id: 3ce230eb-f7ed-4626-9b76-d3372277f96e
	I0524 19:27:40.904477    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:40.904477    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:40.904477    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:40.904477    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:40.904477    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:40 GMT
	I0524 19:27:40.904477    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"405","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0524 19:27:40.904477    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:40.904477    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:40.905931    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:40.905931    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:40.909001    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:40.909001    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:40.909001    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:40.909001    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:40.909001    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:40.909001    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:40.909001    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:40 GMT
	I0524 19:27:40.909001    2624 round_trippers.go:580]     Audit-Id: bafe78a9-b2dc-4134-87c1-94d9fa9f5e12
	I0524 19:27:40.909001    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"399","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4785 chars]
	I0524 19:27:40.909972    2624 pod_ready.go:102] pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace has status "Ready":"False"
	I0524 19:27:41.393005    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:27:41.393005    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:41.393225    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:41.393225    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:41.397494    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:27:41.397494    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:41.398229    2624 round_trippers.go:580]     Audit-Id: 75d689d7-278d-4de9-96da-2c7b64195a88
	I0524 19:27:41.398229    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:41.398229    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:41.398229    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:41.398229    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:41.398322    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:41 GMT
	I0524 19:27:41.398322    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"405","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0524 19:27:41.399228    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:41.399314    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:41.399314    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:41.399314    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:41.402047    2624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:27:41.402047    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:41.402047    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:41.402047    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:41.402047    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:41 GMT
	I0524 19:27:41.402047    2624 round_trippers.go:580]     Audit-Id: ce29d205-c1f8-48df-a7a6-59dee1edf751
	I0524 19:27:41.402047    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:41.402047    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:41.402047    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"399","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4785 chars]
	I0524 19:27:41.883126    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:27:41.883226    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:41.883226    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:41.883226    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:41.886993    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:41.887163    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:41.887163    2624 round_trippers.go:580]     Audit-Id: 47c8d40e-6503-4bd5-860a-45cfea75840d
	I0524 19:27:41.887163    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:41.887163    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:41.887163    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:41.887163    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:41.887163    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:41 GMT
	I0524 19:27:41.887437    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"405","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0524 19:27:41.888187    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:41.888187    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:41.888187    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:41.888187    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:41.891004    2624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:27:41.891004    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:41.891004    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:41.891871    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:41.891871    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:41.891871    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:41 GMT
	I0524 19:27:41.891871    2624 round_trippers.go:580]     Audit-Id: 6adf8772-2e8b-49d7-a9b0-6b3364e623ff
	I0524 19:27:41.891871    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:41.892286    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"399","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4785 chars]
	I0524 19:27:42.384511    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:27:42.384511    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:42.384511    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:42.384511    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:42.392921    2624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:27:42.392921    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:42.392921    2624 round_trippers.go:580]     Audit-Id: 7a018ef8-d159-4c55-9fb3-b33c18e90bef
	I0524 19:27:42.392921    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:42.392921    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:42.392921    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:42.392921    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:42.392921    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:42 GMT
	I0524 19:27:42.392921    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"422","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I0524 19:27:42.393767    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:42.393767    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:42.393767    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:42.393767    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:42.396784    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:42.396784    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:42.396784    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:42 GMT
	I0524 19:27:42.396784    2624 round_trippers.go:580]     Audit-Id: 8b48e5f6-83bc-4d19-8dc9-d65527388ff9
	I0524 19:27:42.396784    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:42.396784    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:42.396784    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:42.396784    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:42.396784    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"399","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4785 chars]
	I0524 19:27:42.397884    2624 pod_ready.go:92] pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace has status "Ready":"True"
	I0524 19:27:42.397935    2624 pod_ready.go:81] duration metric: took 3.5334063s waiting for pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace to be "Ready" ...
	I0524 19:27:42.397935    2624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:27:42.397979    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-237000
	I0524 19:27:42.397979    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:42.397979    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:42.397979    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:42.400593    2624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:27:42.401347    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:42.401347    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:42 GMT
	I0524 19:27:42.401347    2624 round_trippers.go:580]     Audit-Id: da027dde-5226-43d6-aeb8-a0a16f8018fb
	I0524 19:27:42.401347    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:42.401402    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:42.401402    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:42.401476    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:42.401630    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-237000","namespace":"kube-system","uid":"981422ac-e671-44a5-9ad2-b1d9e5ff7133","resourceVersion":"389","creationTimestamp":"2023-05-24T19:27:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.130.107:2379","kubernetes.io/config.hash":"b50925fc64d689df6b7c835d5181c1ec","kubernetes.io/config.mirror":"b50925fc64d689df6b7c835d5181c1ec","kubernetes.io/config.seen":"2023-05-24T19:27:12.143962733Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I0524 19:27:42.401630    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:42.401630    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:42.401630    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:42.402213    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:42.405125    2624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:27:42.405125    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:42.405225    2624 round_trippers.go:580]     Audit-Id: 65cfac4f-2f8c-44cd-b8b5-9eca05a1865c
	I0524 19:27:42.405225    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:42.405225    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:42.405287    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:42.405287    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:42.405350    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:42 GMT
	I0524 19:27:42.405640    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"399","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4785 chars]
	I0524 19:27:42.406308    2624 pod_ready.go:92] pod "etcd-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:27:42.406308    2624 pod_ready.go:81] duration metric: took 8.3736ms waiting for pod "etcd-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:27:42.406308    2624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:27:42.406308    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-237000
	I0524 19:27:42.406308    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:42.406308    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:42.406308    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:42.408902    2624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:27:42.408902    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:42.409127    2624 round_trippers.go:580]     Audit-Id: 377fb97c-bd5f-4377-b852-90afa2fa9573
	I0524 19:27:42.409127    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:42.409127    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:42.409183    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:42.409183    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:42.409253    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:42 GMT
	I0524 19:27:42.409584    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-237000","namespace":"kube-system","uid":"a516131e-ab1a-41f9-95ca-cbfb556e1380","resourceVersion":"390","creationTimestamp":"2023-05-24T19:27:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.130.107:8443","kubernetes.io/config.hash":"9df549a886a8b8feca4108c5fa576f3b","kubernetes.io/config.mirror":"9df549a886a8b8feca4108c5fa576f3b","kubernetes.io/config.seen":"2023-05-24T19:27:00.264374544Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I0524 19:27:42.409584    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:42.410183    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:42.410232    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:42.410232    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:42.412637    2624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:27:42.412637    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:42.412637    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:42.412637    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:42.412637    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:42 GMT
	I0524 19:27:42.412637    2624 round_trippers.go:580]     Audit-Id: 9793cf30-aa05-436a-83bd-c8e8d9a537dd
	I0524 19:27:42.412637    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:42.412637    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:42.413673    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"399","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4785 chars]
	I0524 19:27:42.413673    2624 pod_ready.go:92] pod "kube-apiserver-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:27:42.413673    2624 pod_ready.go:81] duration metric: took 7.3645ms waiting for pod "kube-apiserver-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:27:42.413673    2624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:27:42.413673    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-237000
	I0524 19:27:42.413673    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:42.413673    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:42.413673    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:42.416632    2624 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:27:42.416632    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:42.416632    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:42 GMT
	I0524 19:27:42.416632    2624 round_trippers.go:580]     Audit-Id: 52a81e62-7b6f-4330-9b28-39b4c64dbf70
	I0524 19:27:42.416632    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:42.416632    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:42.416632    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:42.416632    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:42.416632    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-237000","namespace":"kube-system","uid":"1ff7b570-afe4-4076-989f-d0377d04f9d5","resourceVersion":"387","creationTimestamp":"2023-05-24T19:27:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"64b5c92760605da2056b367669d6fc80","kubernetes.io/config.mirror":"64b5c92760605da2056b367669d6fc80","kubernetes.io/config.seen":"2023-05-24T19:27:00.264375644Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I0524 19:27:42.418163    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:42.418163    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:42.418289    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:42.418289    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:42.424995    2624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0524 19:27:42.424995    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:42.424995    2624 round_trippers.go:580]     Audit-Id: ea704845-cc79-43af-b724-a6f7805ce77d
	I0524 19:27:42.424995    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:42.424995    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:42.424995    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:42.424995    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:42.424995    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:42 GMT
	I0524 19:27:42.424995    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"399","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4785 chars]
	I0524 19:27:42.424995    2624 pod_ready.go:92] pod "kube-controller-manager-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:27:42.424995    2624 pod_ready.go:81] duration metric: took 11.3221ms waiting for pod "kube-controller-manager-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:27:42.424995    2624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r6f94" in "kube-system" namespace to be "Ready" ...
	I0524 19:27:42.424995    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6f94
	I0524 19:27:42.424995    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:42.424995    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:42.424995    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:42.428168    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:42.429173    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:42.429173    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:42.429209    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:42 GMT
	I0524 19:27:42.429209    2624 round_trippers.go:580]     Audit-Id: 9e633206-de5b-4025-b6b2-eeea7da5a4e0
	I0524 19:27:42.429209    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:42.429257    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:42.429257    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:42.429498    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r6f94","generateName":"kube-proxy-","namespace":"kube-system","uid":"90a232cf-33b3-4e3b-82bf-9050d39109d1","resourceVersion":"385","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5535 chars]
	I0524 19:27:42.430020    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:42.430086    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:42.430086    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:42.430086    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:42.433136    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:42.433624    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:42.433624    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:42 GMT
	I0524 19:27:42.433687    2624 round_trippers.go:580]     Audit-Id: d1db1777-0c9f-4b57-b16d-29c6f4234e78
	I0524 19:27:42.433687    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:42.433687    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:42.433687    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:42.433687    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:42.433687    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"399","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4785 chars]
	I0524 19:27:42.434405    2624 pod_ready.go:92] pod "kube-proxy-r6f94" in "kube-system" namespace has status "Ready":"True"
	I0524 19:27:42.434405    2624 pod_ready.go:81] duration metric: took 9.41ms waiting for pod "kube-proxy-r6f94" in "kube-system" namespace to be "Ready" ...
	I0524 19:27:42.434405    2624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:27:42.588909    2624 request.go:628] Waited for 154.1431ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-237000
	I0524 19:27:42.589051    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-237000
	I0524 19:27:42.589051    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:42.589322    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:42.589322    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:42.593985    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:27:42.594239    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:42.594239    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:42 GMT
	I0524 19:27:42.594239    2624 round_trippers.go:580]     Audit-Id: 45428373-1236-4703-abca-8ddde26fc26b
	I0524 19:27:42.594239    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:42.594239    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:42.594307    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:42.594404    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:42.594704    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-237000","namespace":"kube-system","uid":"a55c419f-1b04-4895-9fd5-02dd67cd888f","resourceVersion":"388","creationTimestamp":"2023-05-24T19:27:12Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b26a06953be724b5f34183ed712fbb3d","kubernetes.io/config.mirror":"b26a06953be724b5f34183ed712fbb3d","kubernetes.io/config.seen":"2023-05-24T19:27:12.143961333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I0524 19:27:42.797244    2624 request.go:628] Waited for 201.6407ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:42.797459    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:27:42.797459    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:42.797459    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:42.797550    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:42.801498    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:42.802217    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:42.802217    2624 round_trippers.go:580]     Audit-Id: ebf43a82-2137-4428-bf56-131488df219c
	I0524 19:27:42.802217    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:42.802217    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:42.802298    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:42.802298    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:42.802298    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:42 GMT
	I0524 19:27:42.802448    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"399","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4785 chars]
	I0524 19:27:42.802974    2624 pod_ready.go:92] pod "kube-scheduler-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:27:42.803033    2624 pod_ready.go:81] duration metric: took 368.6279ms waiting for pod "kube-scheduler-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:27:42.803033    2624 pod_ready.go:38] duration metric: took 3.9517212s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
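
Editor's note: the trace up to here is minikube's readiness gate. For each control-plane pod it GETs the pod, checks the Ready condition, re-reads the node object, and backs off when the client-side rate limiter throttles it. Below is a minimal client-go sketch of that same check; the kubeconfig path and the helper are placeholders for illustration, not minikube's actual pod_ready.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path; the real run talks to https://172.27.130.107:8443.
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\path\to\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same control-plane pods the log waits on, all in kube-system.
	for _, name := range []string{
		"kube-apiserver-multinode-237000",
		"kube-controller-manager-multinode-237000",
		"kube-proxy-r6f94",
		"kube-scheduler-multinode-237000",
	} {
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Printf("pod %q is Ready\n", name)
				break
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
}
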
	I0524 19:27:42.803033    2624 api_server.go:52] waiting for apiserver process to appear ...
	I0524 19:27:42.812106    2624 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:27:42.836061    2624 command_runner.go:130] > 1952
	I0524 19:27:42.836061    2624 api_server.go:72] duration metric: took 17.7164984s to wait for apiserver process to appear ...
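
Editor's note: the process check above is a single `sudo pgrep -xnf kube-apiserver.*minikube.*` run over SSH inside the guest; the `> 1952` line is the matched PID. Roughly the same check, run locally rather than over SSH, would look like this (illustrative only):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -f matches against the full command line, -x requires the pattern to match it
	// exactly, -n picks only the newest match; a non-zero exit means no apiserver yet.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found yet:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
}
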
	I0524 19:27:42.836061    2624 api_server.go:88] waiting for apiserver healthz status ...
	I0524 19:27:42.836061    2624 api_server.go:253] Checking apiserver healthz at https://172.27.130.107:8443/healthz ...
	I0524 19:27:42.845469    2624 api_server.go:279] https://172.27.130.107:8443/healthz returned 200:
	ok
	I0524 19:27:42.846220    2624 round_trippers.go:463] GET https://172.27.130.107:8443/version
	I0524 19:27:42.846220    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:42.846220    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:42.846220    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:42.851775    2624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:27:42.851775    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:42.851775    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:42.851775    2624 round_trippers.go:580]     Content-Length: 263
	I0524 19:27:42.851775    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:42 GMT
	I0524 19:27:42.851775    2624 round_trippers.go:580]     Audit-Id: 2c37608a-698c-4a79-9679-40157a1f52bb
	I0524 19:27:42.851775    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:42.851775    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:42.852311    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:42.852311    2624 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.2",
	  "gitCommit": "7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647",
	  "gitTreeState": "clean",
	  "buildDate": "2023-05-17T14:13:28Z",
	  "goVersion": "go1.20.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0524 19:27:42.852474    2624 api_server.go:141] control plane version: v1.27.2
	I0524 19:27:42.852474    2624 api_server.go:131] duration metric: took 16.4135ms to wait for apiserver health ...
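
Editor's note: the health gate is two plain HTTPS probes against the apiserver. /healthz must return 200 with body "ok", and /version is decoded to report the control-plane version (v1.27.2 above). A bare-bones equivalent is sketched below; TLS verification is disabled purely to keep the sketch short, whereas the real client trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint taken from the log; InsecureSkipVerify is only for illustration.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://172.27.130.107:8443" + path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
	}
}
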
	I0524 19:27:42.852474    2624 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 19:27:42.985764    2624 request.go:628] Waited for 133.0141ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods
	I0524 19:27:42.985969    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods
	I0524 19:27:42.985969    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:42.986064    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:42.986064    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:42.990481    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:27:42.990481    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:42.990481    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:42.990481    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:42 GMT
	I0524 19:27:42.990481    2624 round_trippers.go:580]     Audit-Id: 872b3d6a-3295-4999-98ba-f7fb616f4beb
	I0524 19:27:42.990481    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:42.990481    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:42.991307    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:42.992664    2624 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"422","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I0524 19:27:42.995468    2624 system_pods.go:59] 8 kube-system pods found
	I0524 19:27:42.995647    2624 system_pods.go:61] "coredns-5d78c9869d-qhx48" [12d04c63-9898-4ccf-9e6d-92d8f3d086a4] Running
	I0524 19:27:42.995647    2624 system_pods.go:61] "etcd-multinode-237000" [981422ac-e671-44a5-9ad2-b1d9e5ff7133] Running
	I0524 19:27:42.995647    2624 system_pods.go:61] "kindnet-xgkpb" [92abc556-b250-4017-9b7c-0fed1aefe2d6] Running
	I0524 19:27:42.995740    2624 system_pods.go:61] "kube-apiserver-multinode-237000" [a516131e-ab1a-41f9-95ca-cbfb556e1380] Running
	I0524 19:27:42.995740    2624 system_pods.go:61] "kube-controller-manager-multinode-237000" [1ff7b570-afe4-4076-989f-d0377d04f9d5] Running
	I0524 19:27:42.995740    2624 system_pods.go:61] "kube-proxy-r6f94" [90a232cf-33b3-4e3b-82bf-9050d39109d1] Running
	I0524 19:27:42.995740    2624 system_pods.go:61] "kube-scheduler-multinode-237000" [a55c419f-1b04-4895-9fd5-02dd67cd888f] Running
	I0524 19:27:42.995740    2624 system_pods.go:61] "storage-provisioner" [6498131a-f2e2-4098-9a5f-6c277fae3747] Running
	I0524 19:27:42.995740    2624 system_pods.go:74] duration metric: took 143.2663ms to wait for pod list to return data ...
	I0524 19:27:42.995740    2624 default_sa.go:34] waiting for default service account to be created ...
	I0524 19:27:43.189442    2624 request.go:628] Waited for 193.5996ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.130.107:8443/api/v1/namespaces/default/serviceaccounts
	I0524 19:27:43.189442    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/default/serviceaccounts
	I0524 19:27:43.189442    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:43.189442    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:43.189442    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:43.194193    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:27:43.194193    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:43.194193    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:43.194193    2624 round_trippers.go:580]     Content-Length: 261
	I0524 19:27:43.194193    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:43 GMT
	I0524 19:27:43.194193    2624 round_trippers.go:580]     Audit-Id: 72ef3f3f-304c-443c-80f6-c087c410fff7
	I0524 19:27:43.194193    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:43.194692    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:43.194692    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:43.194692    2624 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"427"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"25405341-c1be-4363-86d7-6385725f43ef","resourceVersion":"324","creationTimestamp":"2023-05-24T19:27:24Z"}}]}
	I0524 19:27:43.195168    2624 default_sa.go:45] found service account: "default"
	I0524 19:27:43.195265    2624 default_sa.go:55] duration metric: took 199.5247ms for default service account to be created ...
	I0524 19:27:43.195265    2624 system_pods.go:116] waiting for k8s-apps to be running ...
	I0524 19:27:43.391922    2624 request.go:628] Waited for 196.5709ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods
	I0524 19:27:43.392121    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods
	I0524 19:27:43.392121    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:43.392121    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:43.392121    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:43.400830    2624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:27:43.400830    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:43.400830    2624 round_trippers.go:580]     Audit-Id: 80e541e7-583b-4129-962e-d68015243e12
	I0524 19:27:43.400830    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:43.400830    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:43.400830    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:43.400830    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:43.400830    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:43 GMT
	I0524 19:27:43.401709    2624 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"422","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54127 chars]
	I0524 19:27:43.405335    2624 system_pods.go:86] 8 kube-system pods found
	I0524 19:27:43.405475    2624 system_pods.go:89] "coredns-5d78c9869d-qhx48" [12d04c63-9898-4ccf-9e6d-92d8f3d086a4] Running
	I0524 19:27:43.405475    2624 system_pods.go:89] "etcd-multinode-237000" [981422ac-e671-44a5-9ad2-b1d9e5ff7133] Running
	I0524 19:27:43.405475    2624 system_pods.go:89] "kindnet-xgkpb" [92abc556-b250-4017-9b7c-0fed1aefe2d6] Running
	I0524 19:27:43.405475    2624 system_pods.go:89] "kube-apiserver-multinode-237000" [a516131e-ab1a-41f9-95ca-cbfb556e1380] Running
	I0524 19:27:43.405475    2624 system_pods.go:89] "kube-controller-manager-multinode-237000" [1ff7b570-afe4-4076-989f-d0377d04f9d5] Running
	I0524 19:27:43.405475    2624 system_pods.go:89] "kube-proxy-r6f94" [90a232cf-33b3-4e3b-82bf-9050d39109d1] Running
	I0524 19:27:43.405475    2624 system_pods.go:89] "kube-scheduler-multinode-237000" [a55c419f-1b04-4895-9fd5-02dd67cd888f] Running
	I0524 19:27:43.405475    2624 system_pods.go:89] "storage-provisioner" [6498131a-f2e2-4098-9a5f-6c277fae3747] Running
	I0524 19:27:43.405475    2624 system_pods.go:126] duration metric: took 210.2099ms to wait for k8s-apps to be running ...
	I0524 19:27:43.405475    2624 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 19:27:43.415033    2624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 19:27:43.437191    2624 system_svc.go:56] duration metric: took 31.1295ms WaitForService to wait for kubelet.
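
Editor's note: the kubelet check above is one `sudo systemctl is-active --quiet service kubelet` over SSH, where exit code 0 is treated as "running". A simplified local version of that systemd probe (unit name as in the log, SSH transport omitted):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 only when the unit is active;
	// the test run executes the equivalent over SSH inside the guest VM.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
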
	I0524 19:27:43.437191    2624 kubeadm.go:581] duration metric: took 18.3176293s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 19:27:43.437191    2624 node_conditions.go:102] verifying NodePressure condition ...
	I0524 19:27:43.593224    2624 request.go:628] Waited for 155.7948ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.130.107:8443/api/v1/nodes
	I0524 19:27:43.593522    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes
	I0524 19:27:43.593522    2624 round_trippers.go:469] Request Headers:
	I0524 19:27:43.593522    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:27:43.593617    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:27:43.597305    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:27:43.598353    2624 round_trippers.go:577] Response Headers:
	I0524 19:27:43.598353    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:27:43.598383    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:27:43.598383    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:27:43.598383    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:27:43.598464    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:27:43 GMT
	I0524 19:27:43.598464    2624 round_trippers.go:580]     Audit-Id: a288c5a8-bf01-44a8-8d5e-73c21d9539de
	I0524 19:27:43.598693    2624 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"428"},"items":[{"metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"428","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5014 chars]
	I0524 19:27:43.599304    2624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:27:43.599304    2624 node_conditions.go:123] node cpu capacity is 2
	I0524 19:27:43.599304    2624 node_conditions.go:105] duration metric: took 162.1124ms to run NodePressure ...
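
Editor's note: the NodePressure step lists all nodes and reads their capacity fields; the two figures logged above (17784752Ki ephemeral storage, 2 CPUs) come straight from node.Status.Capacity. A compact client-go sketch, using the same kind of placeholder kubeconfig as the earlier readiness example:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path, as before.
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\path\to\kubeconfig`)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList; Cpu()/StorageEphemeral() return *resource.Quantity.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}
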
	I0524 19:27:43.599535    2624 start.go:228] waiting for startup goroutines ...
	I0524 19:27:43.599535    2624 start.go:233] waiting for cluster config update ...
	I0524 19:27:43.599535    2624 start.go:242] writing updated cluster config ...
	I0524 19:27:43.603618    2624 out.go:177] 
	I0524 19:27:43.613445    2624 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:27:43.613445    2624 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\config.json ...
	I0524 19:27:43.619613    2624 out.go:177] * Starting worker node multinode-237000-m02 in cluster multinode-237000
	I0524 19:27:43.620826    2624 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 19:27:43.620826    2624 cache.go:57] Caching tarball of preloaded images
	I0524 19:27:43.622124    2624 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0524 19:27:43.622372    2624 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 19:27:43.622372    2624 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\config.json ...
	I0524 19:27:43.625338    2624 cache.go:195] Successfully downloaded all kic artifacts
	I0524 19:27:43.625415    2624 start.go:364] acquiring machines lock for multinode-237000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 19:27:43.625525    2624 start.go:368] acquired machines lock for "multinode-237000-m02" in 110.1µs
	I0524 19:27:43.625755    2624 start.go:93] Provisioning new machine with config: &{Name:multinode-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.130.107 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0524 19:27:43.625915    2624 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0524 19:27:43.629551    2624 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0524 19:27:43.629799    2624 start.go:159] libmachine.API.Create for "multinode-237000" (driver="hyperv")
	I0524 19:27:43.629799    2624 client.go:168] LocalClient.Create starting
	I0524 19:27:43.630497    2624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0524 19:27:43.630529    2624 main.go:141] libmachine: Decoding PEM data...
	I0524 19:27:43.630529    2624 main.go:141] libmachine: Parsing certificate...
	I0524 19:27:43.630529    2624 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0524 19:27:43.631070    2624 main.go:141] libmachine: Decoding PEM data...
	I0524 19:27:43.631070    2624 main.go:141] libmachine: Parsing certificate...
	I0524 19:27:43.631240    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0524 19:27:44.063126    2624 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0524 19:27:44.063311    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:44.063311    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0524 19:27:44.716039    2624 main.go:141] libmachine: [stdout =====>] : False
	
	I0524 19:27:44.716116    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:44.716116    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0524 19:27:45.241985    2624 main.go:141] libmachine: [stdout =====>] : True
	
	I0524 19:27:45.241985    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:45.241985    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0524 19:27:46.821798    2624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0524 19:27:46.821884    2624 main.go:141] libmachine: [stderr =====>] : 
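
Editor's note: everything the Hyper-V driver does goes through `powershell.exe -NoProfile -NonInteractive`, as the "[executing ==>]" lines show. Here it asks for the available VM switches as JSON and settles on "Default Switch". A stripped-down sketch of that probe from Go follows; the Where-Object filter from the log is dropped for brevity.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func main() {
	// Same shape of call as the "[executing ==>]" lines: one PowerShell statement, JSON out.
	script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	if err != nil {
		panic(err)
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("%s (%s) SwitchType=%d\n", s.Name, s.Id, s.SwitchType)
	}
}
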
	I0524 19:27:46.824879    2624 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.30.1-1684536668-16501-amd64.iso...
	I0524 19:27:47.268045    2624 main.go:141] libmachine: Creating SSH key...
	I0524 19:27:47.444848    2624 main.go:141] libmachine: Creating VM...
	I0524 19:27:47.444848    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0524 19:27:48.863997    2624 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0524 19:27:48.864162    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:48.864162    2624 main.go:141] libmachine: Using switch "Default Switch"
	I0524 19:27:48.864435    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0524 19:27:49.546518    2624 main.go:141] libmachine: [stdout =====>] : True
	
	I0524 19:27:49.546736    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:49.546736    2624 main.go:141] libmachine: Creating VHD
	I0524 19:27:49.546830    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0524 19:27:51.271771    2624 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : FDD461A4-84AF-40CD-B505-283E9B01DB08
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0524 19:27:51.271931    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:51.272003    2624 main.go:141] libmachine: Writing magic tar header
	I0524 19:27:51.272003    2624 main.go:141] libmachine: Writing SSH key tar header
	I0524 19:27:51.281409    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0524 19:27:53.050560    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:27:53.050862    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:53.050862    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\disk.vhd' -SizeBytes 20000MB
	I0524 19:27:54.278010    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:27:54.278064    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:54.278227    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-237000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0524 19:27:56.219453    2624 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-237000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0524 19:27:56.219453    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:56.219550    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-237000-m02 -DynamicMemoryEnabled $false
	I0524 19:27:57.074346    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:27:57.074346    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:57.074346    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-237000-m02 -Count 2
	I0524 19:27:57.884732    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:27:57.884953    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:57.885025    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-237000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\boot2docker.iso'
	I0524 19:27:59.016579    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:27:59.016963    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:27:59.016963    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-237000-m02 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\disk.vhd'
	I0524 19:28:00.325682    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:28:00.325682    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:00.325682    2624 main.go:141] libmachine: Starting VM...
	I0524 19:28:00.325682    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-237000-m02
	I0524 19:28:02.074150    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:28:02.074150    2624 main.go:141] libmachine: [stderr =====>] : 
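
Editor's note: the VM is built as a fixed sequence of PowerShell steps: create a tiny 10MB fixed VHD, seed it with a small tar archive holding the generated SSH key (the "Writing magic tar header" / "Writing SSH key tar header" lines, which the boot2docker-style guest appears to pick up on first boot), convert it to a dynamic disk.vhd, grow it to 20000MB, then New-VM on the Default Switch with 2200MB, static memory, 2 vCPUs, the ISO as DVD, the VHD attached, and Start-VM. A condensed sketch of that sequence, with the machine directory as a placeholder:

package main

import (
	"fmt"
	"os/exec"
)

// runPS executes one PowerShell statement the way the "[executing ==>]" lines do.
func runPS(stmt string) error {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", stmt).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s: %v\n%s", stmt, err, out)
	}
	return nil
}

func main() {
	name := "multinode-237000-m02"
	dir := `C:\path\to\machines\` + name // placeholder machine directory
	steps := []string{
		// Disk: tiny fixed VHD first (the driver writes the SSH-key tarball into it),
		// then convert to a dynamic VHD and grow it to the requested size.
		fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir),
		fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir),
		fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir),
		// VM: create, pin memory and CPUs, attach the boot ISO and the data disk, start.
		fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, name, dir),
		fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name),
		fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, name),
		fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, name, dir),
		fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, name, dir),
		fmt.Sprintf(`Hyper-V\Start-VM %s`, name),
	}
	for _, s := range steps {
		if err := runPS(s); err != nil {
			panic(err)
		}
	}
}
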
	I0524 19:28:02.074150    2624 main.go:141] libmachine: Waiting for host to start...
	I0524 19:28:02.074150    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:02.882080    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:02.882080    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:02.882270    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:03.994853    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:28:03.994853    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:05.010445    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:05.802999    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:05.802999    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:05.802999    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:06.879090    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:28:06.879131    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:07.882873    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:08.692672    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:08.692672    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:08.692798    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:09.769435    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:28:09.769675    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:10.784588    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:11.560580    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:11.560715    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:11.560814    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:12.641471    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:28:12.641471    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:13.642760    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:14.401445    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:14.401485    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:14.401560    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:15.495561    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:28:15.495699    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:16.510618    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:17.271533    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:17.271655    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:17.271655    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:18.337320    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:28:18.337320    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:19.337683    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:20.117467    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:20.117467    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:20.117734    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:21.234748    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:28:21.234816    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:22.238071    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:22.999003    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:22.999003    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:22.999100    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:24.074410    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:28:24.074410    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:25.088512    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:25.868008    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:25.868046    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:25.868224    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:26.932929    2624 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:28:26.932929    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:27.948225    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:28.710026    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:28.710026    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:28.710123    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:29.840014    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:29.840246    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:29.840304    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:30.603848    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:30.603848    2624 main.go:141] libmachine: [stderr =====>] : 
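
Editor's note: between Start-VM and provisioning, the driver simply polls the VM state and the first adapter's first IP address about once a second; the address (172.27.128.127) only shows up once the guest has booted far enough to obtain a DHCP lease from the Default Switch. A sketch of that wait loop:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Same query the log repeats until it returns a non-empty address.
	query := `((Hyper-V\Get-VM multinode-237000-m02).NetworkAdapters[0]).IPAddresses[0]`
	for {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
		ip := strings.TrimSpace(string(out))
		if err == nil && ip != "" {
			fmt.Println("VM reachable at", ip)
			return
		}
		time.Sleep(time.Second)
	}
}
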
	I0524 19:28:30.604035    2624 machine.go:88] provisioning docker machine ...
	I0524 19:28:30.604035    2624 buildroot.go:166] provisioning hostname "multinode-237000-m02"
	I0524 19:28:30.604035    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:31.384611    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:31.384658    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:31.384727    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:32.489699    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:32.489699    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:32.495003    2624 main.go:141] libmachine: Using SSH client type: native
	I0524 19:28:32.495921    2624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.128.127 22 <nil> <nil>}
	I0524 19:28:32.495984    2624 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-237000-m02 && echo "multinode-237000-m02" | sudo tee /etc/hostname
	I0524 19:28:32.679201    2624 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-237000-m02
	
	I0524 19:28:32.679290    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:33.467538    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:33.467538    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:33.467616    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:34.526362    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:34.526529    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:34.532635    2624 main.go:141] libmachine: Using SSH client type: native
	I0524 19:28:34.533464    2624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.128.127 22 <nil> <nil>}
	I0524 19:28:34.533464    2624 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-237000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-237000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-237000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 19:28:34.687929    2624 main.go:141] libmachine: SSH cmd err, output: <nil>: 
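
Editor's note: the "Using SSH client type: native" lines are minikube's built-in SSH client pushing the hostname and /etc/hosts changes into the new guest. A rough equivalent with golang.org/x/crypto/ssh is sketched below; the user name and key path are assumptions for illustration, since the log does not print them.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Assumed key path; the real key lives under the machine directory created earlier.
	key, err := os.ReadFile(`C:\path\to\machines\multinode-237000-m02\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // assumed user for the buildroot guest
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.27.128.127:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same hostname command the log runs first.
	out, err := sess.CombinedOutput(`sudo hostname multinode-237000-m02 && echo "multinode-237000-m02" | sudo tee /etc/hostname`)
	fmt.Printf("%s err=%v\n", out, err)
}
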
	I0524 19:28:34.687929    2624 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0524 19:28:34.687929    2624 buildroot.go:174] setting up certificates
	I0524 19:28:34.687929    2624 provision.go:83] configureAuth start
	I0524 19:28:34.687929    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:35.443799    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:35.443799    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:35.443799    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:36.569091    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:36.569178    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:36.569178    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:37.309442    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:37.309442    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:37.309521    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:38.388797    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:38.388797    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:38.388797    2624 provision.go:138] copyHostCerts
	I0524 19:28:38.389052    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0524 19:28:38.389497    2624 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0524 19:28:38.389497    2624 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0524 19:28:38.389926    2624 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0524 19:28:38.390883    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0524 19:28:38.390883    2624 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0524 19:28:38.390883    2624 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0524 19:28:38.391754    2624 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0524 19:28:38.393320    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0524 19:28:38.393563    2624 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0524 19:28:38.393563    2624 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0524 19:28:38.393877    2624 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0524 19:28:38.394846    2624 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-237000-m02 san=[172.27.128.127 172.27.128.127 localhost 127.0.0.1 minikube multinode-237000-m02]
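
The server cert generated here is what dockerd on the new node will present on tcp://0.0.0.0:2376; it is signed by the shared minikube CA and carries the node IP, localhost, and machine name as SANs. A rough sketch of that kind of CA-signed certificate generation with crypto/x509 (file names, key format, and validity period are placeholders, not libmachine's exact values):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA certificate and key; assumes a PEM cert and a PKCS#1 RSA key,
	// matching what the ca.pem / ca-key.pem pair in the log appears to be.
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	check(err)

	// Fresh server key plus a template carrying the SANs listed in the log line above.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-237000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // placeholder validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-237000-m02"},
		IPAddresses:  []net.IP{net.ParseIP("172.27.128.127"), net.ParseIP("127.0.0.1")},
	}

	// Sign with the CA and write the server certificate out as PEM (server.pem).
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	out, err := os.Create("server.pem")
	check(err)
	defer out.Close()
	check(pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
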
	I0524 19:28:38.625406    2624 provision.go:172] copyRemoteCerts
	I0524 19:28:38.635453    2624 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 19:28:38.635453    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:39.389217    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:39.389381    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:39.389381    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:40.462377    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:40.462377    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:40.462377    2624 sshutil.go:53] new ssh client: &{IP:172.27.128.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\id_rsa Username:docker}
	I0524 19:28:40.573644    2624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.9380752s)
	I0524 19:28:40.573644    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0524 19:28:40.573644    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0524 19:28:40.625445    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0524 19:28:40.625817    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0524 19:28:40.665575    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0524 19:28:40.665575    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0524 19:28:40.706057    2624 provision.go:86] duration metric: configureAuth took 6.0181303s
	I0524 19:28:40.706057    2624 buildroot.go:189] setting minikube options for container-runtime
	I0524 19:28:40.707402    2624 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:28:40.707451    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:41.454379    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:41.454379    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:41.454379    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:42.548816    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:42.548816    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:42.553535    2624 main.go:141] libmachine: Using SSH client type: native
	I0524 19:28:42.554305    2624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.128.127 22 <nil> <nil>}
	I0524 19:28:42.554305    2624 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 19:28:42.697478    2624 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 19:28:42.697614    2624 buildroot.go:70] root file system type: tmpfs
	I0524 19:28:42.698046    2624 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 19:28:42.698189    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:43.448208    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:43.448208    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:43.448208    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:44.522289    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:44.522521    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:44.527385    2624 main.go:141] libmachine: Using SSH client type: native
	I0524 19:28:44.528370    2624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.128.127 22 <nil> <nil>}
	I0524 19:28:44.528487    2624 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.130.107"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 19:28:44.693739    2624 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.130.107
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 19:28:44.693801    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:45.468241    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:45.468241    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:45.468330    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:46.529038    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:46.529038    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:46.534292    2624 main.go:141] libmachine: Using SSH client type: native
	I0524 19:28:46.535145    2624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.128.127 22 <nil> <nil>}
	I0524 19:28:46.535145    2624 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 19:28:47.645291    2624 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 19:28:47.645356    2624 machine.go:91] provisioned docker machine in 17.0413278s
	I0524 19:28:47.645356    2624 client.go:171] LocalClient.Create took 1m4.0155831s
	I0524 19:28:47.645356    2624 start.go:167] duration metric: libmachine.API.Create for "multinode-237000" took 1m4.0155831s
	I0524 19:28:47.645356    2624 start.go:300] post-start starting for "multinode-237000-m02" (driver="hyperv")
	I0524 19:28:47.645356    2624 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 19:28:47.653659    2624 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 19:28:47.653659    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:48.403110    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:48.403298    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:48.403471    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:49.492916    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:49.493125    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:49.493601    2624 sshutil.go:53] new ssh client: &{IP:172.27.128.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\id_rsa Username:docker}
	I0524 19:28:49.608000    2624 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.9543416s)
	I0524 19:28:49.618266    2624 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 19:28:49.625208    2624 command_runner.go:130] > NAME=Buildroot
	I0524 19:28:49.625310    2624 command_runner.go:130] > VERSION=2021.02.12-1-g419828a-dirty
	I0524 19:28:49.625310    2624 command_runner.go:130] > ID=buildroot
	I0524 19:28:49.625310    2624 command_runner.go:130] > VERSION_ID=2021.02.12
	I0524 19:28:49.625310    2624 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0524 19:28:49.625450    2624 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 19:28:49.625607    2624 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0524 19:28:49.626342    2624 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0524 19:28:49.628091    2624 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> 65602.pem in /etc/ssl/certs
	I0524 19:28:49.628091    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> /etc/ssl/certs/65602.pem
	I0524 19:28:49.639201    2624 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0524 19:28:49.653327    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /etc/ssl/certs/65602.pem (1708 bytes)
	I0524 19:28:49.697359    2624 start.go:303] post-start completed in 2.0520035s
	I0524 19:28:49.700005    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:50.456168    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:50.456407    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:50.456525    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:51.517231    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:51.517231    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:51.517624    2624 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\config.json ...
	I0524 19:28:51.520354    2624 start.go:128] duration metric: createHost completed in 1m7.8944668s
	I0524 19:28:51.520354    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:52.294906    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:52.294967    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:52.294967    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:53.374654    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:53.374654    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:53.379411    2624 main.go:141] libmachine: Using SSH client type: native
	I0524 19:28:53.380085    2624 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.128.127 22 <nil> <nil>}
	I0524 19:28:53.380085    2624 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0524 19:28:53.520655    2624 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684956533.518743375
	
	I0524 19:28:53.520655    2624 fix.go:207] guest clock: 1684956533.518743375
	I0524 19:28:53.520655    2624 fix.go:220] Guest: 2023-05-24 19:28:53.518743375 +0000 UTC Remote: 2023-05-24 19:28:51.5203548 +0000 UTC m=+213.316208301 (delta=1.998388575s)
	I0524 19:28:53.520655    2624 fix.go:191] guest clock delta is within tolerance: 1.998388575s
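
The guest clock check reads `date +%s.%N` over SSH and compares it with the host-side timestamp recorded when the command returned; the 1.998s delta above is accepted because it sits under the allowed skew. A small sketch of that comparison using the values from this log (the 2-second tolerance here is illustrative, not necessarily minikube's exact threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log above.
	guestRaw := "1684956533.518743375"

	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Host-side timestamp taken when the command returned (also from the log).
	host := time.Date(2023, 5, 24, 19, 28, 51, 520354800, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}

	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
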
	I0524 19:28:53.520655    2624 start.go:83] releasing machines lock for "multinode-237000-m02", held for 1m9.8950575s
	I0524 19:28:53.521190    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:54.308215    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:54.308215    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:54.308290    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:55.379272    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:55.379272    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:55.382998    2624 out.go:177] * Found network options:
	I0524 19:28:55.387076    2624 out.go:177]   - NO_PROXY=172.27.130.107
	W0524 19:28:55.389370    2624 proxy.go:119] fail to check proxy env: Error ip not in block
	I0524 19:28:55.391507    2624 out.go:177]   - no_proxy=172.27.130.107
	W0524 19:28:55.394059    2624 proxy.go:119] fail to check proxy env: Error ip not in block
	W0524 19:28:55.395347    2624 proxy.go:119] fail to check proxy env: Error ip not in block
	I0524 19:28:55.397276    2624 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 19:28:55.397276    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:55.404256    2624 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0524 19:28:55.404256    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:28:56.191480    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:56.191759    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:56.191678    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:28:56.191759    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:56.191871    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:56.191871    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:28:57.366583    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:57.367106    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:57.367542    2624 sshutil.go:53] new ssh client: &{IP:172.27.128.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\id_rsa Username:docker}
	I0524 19:28:57.394406    2624 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:28:57.394406    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:28:57.394984    2624 sshutil.go:53] new ssh client: &{IP:172.27.128.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\id_rsa Username:docker}
	I0524 19:28:57.570984    2624 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0524 19:28:57.571069    2624 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.1737938s)
	I0524 19:28:57.571177    2624 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0524 19:28:57.571247    2624 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (2.166992s)
	W0524 19:28:57.571360    2624 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 19:28:57.585283    2624 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 19:28:57.614226    2624 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0524 19:28:57.614226    2624 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
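
Because kindnet will be installed as the cluster CNI, any pre-existing bridge or podman CNI configs are renamed with a .mk_disabled suffix so the runtime ignores them; the find/-exec mv one-liner above found and disabled 87-podman-bridge.conflist. A local-filesystem analogue of that step (the real run executes it on the node over SSH):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Mirrors: find /etc/cni/net.d -maxdepth 1 -type f
	//   \( \( -name *bridge* -or -name *podman* \) -and -not -name *.mk_disabled \)
	//   -exec mv {} {}.mk_disabled \;
	const cniDir = "/etc/cni/net.d"
	entries, err := os.ReadDir(cniDir)
	if err != nil {
		fmt.Println("read dir:", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(cniDir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Println("rename:", err)
				continue
			}
			fmt.Println("disabled", src)
		}
	}
}
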
	I0524 19:28:57.614226    2624 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 19:28:57.622149    2624 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 19:28:57.660209    2624 docker.go:633] Got preloaded images: 
	I0524 19:28:57.660279    2624 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0524 19:28:57.670680    2624 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 19:28:57.693785    2624 command_runner.go:139] > {"Repositories":{}}
	I0524 19:28:57.704119    2624 ssh_runner.go:195] Run: which lz4
	I0524 19:28:57.709976    2624 command_runner.go:130] > /usr/bin/lz4
	I0524 19:28:57.709976    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0524 19:28:57.719882    2624 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0524 19:28:57.725369    2624 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 19:28:57.725723    2624 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0524 19:28:57.725723    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (412256110 bytes)
	I0524 19:29:00.593009    2624 docker.go:597] Took 2.882240 seconds to copy over tarball
	I0524 19:29:00.604189    2624 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0524 19:29:10.123512    2624 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.5192208s)
	I0524 19:29:10.123567    2624 ssh_runner.go:146] rm: /preloaded.tar.lz4
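
Since the freshly created m02 node has no images (`stat /preloaded.tar.lz4` failed and repositories.json was empty), the ~412 MB preload tarball is copied from the host cache, unpacked into /var with lz4, and then deleted. A condensed local sketch of that check/extract/cleanup sequence, assuming tar and lz4 are available on the target:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Existence check, equivalent to the `stat -c "%s %y" /preloaded.tar.lz4` probe in the log.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload tarball missing; it would be scp'd from the host cache first:", err)
		return
	}

	// Unpack into /var the same way the log does: tar -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}

	// The tarball is removed afterwards to reclaim space, as the log shows.
	if err := os.Remove(tarball); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}
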
	I0524 19:29:10.198502    2624 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0524 19:29:10.216007    2624 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.7-0":"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83":"sha256:86b6af7dd652c1b38118be1c338e
9354b33469e69a218f7e290a0ca5304ad681"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.27.2":"sha256:c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370","registry.k8s.io/kube-apiserver@sha256:94e48585629fde3c1d06c6ae8f62885d3052f12a1072ffd97611296525eff5b9":"sha256:c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.27.2":"sha256:ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12","registry.k8s.io/kube-controller-manager@sha256:b0990ef7c9ce9edd0f57355a7e4cb43a71e864bfd2cd55bc68e4998e00213b56":"sha256:ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.27.2":"sha256:b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee","registry.k8s.io/kube-proxy@sha256:1e4f13f5f5c215813fb9c9c6f56da1c0354363f2a69bd12732658f79d585864f":"sha256:b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d
32174dc13e7dee"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.27.2":"sha256:89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0","registry.k8s.io/kube-scheduler@sha256:89e8c591cc58bd0e8651dddee3de290399b1ae5ad14779afe84779083fe05177":"sha256:89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0524 19:29:10.216007    2624 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0524 19:29:10.254724    2624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:29:10.439096    2624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 19:29:12.653912    2624 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.2148165s)
	I0524 19:29:12.653974    2624 start.go:481] detecting cgroup driver to use...
	I0524 19:29:12.653974    2624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 19:29:12.691863    2624 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0524 19:29:12.701729    2624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 19:29:12.728089    2624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 19:29:12.746528    2624 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 19:29:12.755504    2624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 19:29:12.784882    2624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 19:29:12.809010    2624 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 19:29:12.835016    2624 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 19:29:12.865633    2624 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 19:29:12.891523    2624 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 19:29:12.921470    2624 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 19:29:12.937578    2624 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0524 19:29:12.947316    2624 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 19:29:12.975915    2624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:29:13.142785    2624 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 19:29:13.174931    2624 start.go:481] detecting cgroup driver to use...
	I0524 19:29:13.187785    2624 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 19:29:13.207384    2624 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0524 19:29:13.207384    2624 command_runner.go:130] > [Unit]
	I0524 19:29:13.207384    2624 command_runner.go:130] > Description=Docker Application Container Engine
	I0524 19:29:13.207451    2624 command_runner.go:130] > Documentation=https://docs.docker.com
	I0524 19:29:13.207451    2624 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0524 19:29:13.207451    2624 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0524 19:29:13.207451    2624 command_runner.go:130] > StartLimitBurst=3
	I0524 19:29:13.207451    2624 command_runner.go:130] > StartLimitIntervalSec=60
	I0524 19:29:13.207451    2624 command_runner.go:130] > [Service]
	I0524 19:29:13.207451    2624 command_runner.go:130] > Type=notify
	I0524 19:29:13.207517    2624 command_runner.go:130] > Restart=on-failure
	I0524 19:29:13.207517    2624 command_runner.go:130] > Environment=NO_PROXY=172.27.130.107
	I0524 19:29:13.207553    2624 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0524 19:29:13.207553    2624 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0524 19:29:13.207553    2624 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0524 19:29:13.207553    2624 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0524 19:29:13.207553    2624 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0524 19:29:13.207553    2624 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0524 19:29:13.207553    2624 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0524 19:29:13.207553    2624 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0524 19:29:13.207553    2624 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0524 19:29:13.207553    2624 command_runner.go:130] > ExecStart=
	I0524 19:29:13.207553    2624 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0524 19:29:13.207553    2624 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0524 19:29:13.207553    2624 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0524 19:29:13.207553    2624 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0524 19:29:13.207553    2624 command_runner.go:130] > LimitNOFILE=infinity
	I0524 19:29:13.207553    2624 command_runner.go:130] > LimitNPROC=infinity
	I0524 19:29:13.207553    2624 command_runner.go:130] > LimitCORE=infinity
	I0524 19:29:13.207553    2624 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0524 19:29:13.207553    2624 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0524 19:29:13.207553    2624 command_runner.go:130] > TasksMax=infinity
	I0524 19:29:13.207553    2624 command_runner.go:130] > TimeoutStartSec=0
	I0524 19:29:13.207553    2624 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0524 19:29:13.207553    2624 command_runner.go:130] > Delegate=yes
	I0524 19:29:13.207553    2624 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0524 19:29:13.207553    2624 command_runner.go:130] > KillMode=process
	I0524 19:29:13.207553    2624 command_runner.go:130] > [Install]
	I0524 19:29:13.207553    2624 command_runner.go:130] > WantedBy=multi-user.target
	I0524 19:29:13.216927    2624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 19:29:13.250434    2624 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 19:29:13.289039    2624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 19:29:13.321673    2624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 19:29:13.356667    2624 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 19:29:13.419134    2624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 19:29:13.442528    2624 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 19:29:13.473714    2624 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0524 19:29:13.484078    2624 ssh_runner.go:195] Run: which cri-dockerd
	I0524 19:29:13.489893    2624 command_runner.go:130] > /usr/bin/cri-dockerd
	I0524 19:29:13.499659    2624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 19:29:13.516512    2624 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 19:29:13.558200    2624 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 19:29:13.732903    2624 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 19:29:13.910349    2624 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 19:29:13.910349    2624 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 19:29:13.951150    2624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:29:14.136153    2624 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 19:29:15.705336    2624 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5691837s)
	I0524 19:29:15.714339    2624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 19:29:15.887441    2624 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 19:29:16.069193    2624 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 19:29:16.252296    2624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:29:16.443673    2624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 19:29:16.482435    2624 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:29:16.677459    2624 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 19:29:16.789808    2624 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 19:29:16.798769    2624 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 19:29:16.807804    2624 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0524 19:29:16.807804    2624 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0524 19:29:16.807804    2624 command_runner.go:130] > Device: 16h/22d	Inode: 974         Links: 1
	I0524 19:29:16.807804    2624 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0524 19:29:16.807804    2624 command_runner.go:130] > Access: 2023-05-24 19:29:16.698654935 +0000
	I0524 19:29:16.807804    2624 command_runner.go:130] > Modify: 2023-05-24 19:29:16.698654935 +0000
	I0524 19:29:16.807804    2624 command_runner.go:130] > Change: 2023-05-24 19:29:16.702654900 +0000
	I0524 19:29:16.807804    2624 command_runner.go:130] >  Birth: -
	I0524 19:29:16.807804    2624 start.go:549] Will wait 60s for crictl version
	I0524 19:29:16.821410    2624 ssh_runner.go:195] Run: which crictl
	I0524 19:29:16.828339    2624 command_runner.go:130] > /usr/bin/crictl
	I0524 19:29:16.838025    2624 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 19:29:16.902939    2624 command_runner.go:130] > Version:  0.1.0
	I0524 19:29:16.903002    2624 command_runner.go:130] > RuntimeName:  docker
	I0524 19:29:16.903002    2624 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0524 19:29:16.903002    2624 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0524 19:29:16.903002    2624 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 19:29:16.910529    2624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 19:29:16.954116    2624 command_runner.go:130] > 20.10.23
	I0524 19:29:16.961604    2624 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 19:29:17.004758    2624 command_runner.go:130] > 20.10.23
	I0524 19:29:17.007772    2624 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 19:29:17.010764    2624 out.go:177]   - env NO_PROXY=172.27.130.107
	I0524 19:29:17.012844    2624 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0524 19:29:17.017782    2624 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0524 19:29:17.017782    2624 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0524 19:29:17.017782    2624 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0524 19:29:17.017782    2624 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:74:1b:be Flags:up|broadcast|multicast|running}
	I0524 19:29:17.020795    2624 ip.go:210] interface addr: fe80::2d9b:6c8:36de:16db/64
	I0524 19:29:17.020795    2624 ip.go:210] interface addr: 172.27.128.1/20
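
ip.go walks the host's network interfaces looking for the one whose name starts with "vEthernet (Default Switch)" and takes its IPv4 address (172.27.128.1); that address is what gets written into the guest's /etc/hosts as host.minikube.internal in the next step. A simplified version of that lookup (the prefix matching here is an approximation of minikube's matching rules):

package main

import (
	"fmt"
	"net"
	"strings"
)

// ipForInterface returns the first IPv4 address on an interface whose name
// starts with the given prefix, e.g. "vEthernet (Default Switch)".
func ipForInterface(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, addr := range addrs {
			if ipnet, ok := addr.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP.To4(), nil
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q with an IPv4 address", prefix)
}

func main() {
	ip, err := ipForInterface("vEthernet (Default Switch)")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("host.minikube.internal ->", ip) // e.g. 172.27.128.1
}
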
	I0524 19:29:17.030795    2624 ssh_runner.go:195] Run: grep 172.27.128.1	host.minikube.internal$ /etc/hosts
	I0524 19:29:17.036927    2624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 19:29:17.061424    2624 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000 for IP: 172.27.128.127
	I0524 19:29:17.061495    2624 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:29:17.061841    2624 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0524 19:29:17.062718    2624 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0524 19:29:17.062718    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0524 19:29:17.063182    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0524 19:29:17.063361    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0524 19:29:17.063557    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0524 19:29:17.063896    2624 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem (1338 bytes)
	W0524 19:29:17.063896    2624 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560_empty.pem, impossibly tiny 0 bytes
	I0524 19:29:17.063896    2624 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0524 19:29:17.064693    2624 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0524 19:29:17.064980    2624 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0524 19:29:17.065330    2624 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0524 19:29:17.065842    2624 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem (1708 bytes)
	I0524 19:29:17.066125    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> /usr/share/ca-certificates/65602.pem
	I0524 19:29:17.066297    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:29:17.066485    2624 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem -> /usr/share/ca-certificates/6560.pem
	I0524 19:29:17.067025    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 19:29:17.114060    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 19:29:17.155076    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 19:29:17.198410    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 19:29:17.239913    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /usr/share/ca-certificates/65602.pem (1708 bytes)
	I0524 19:29:17.279725    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 19:29:17.319874    2624 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem --> /usr/share/ca-certificates/6560.pem (1338 bytes)
	I0524 19:29:17.368925    2624 ssh_runner.go:195] Run: openssl version
	I0524 19:29:17.380211    2624 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0524 19:29:17.390462    2624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65602.pem && ln -fs /usr/share/ca-certificates/65602.pem /etc/ssl/certs/65602.pem"
	I0524 19:29:17.419470    2624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65602.pem
	I0524 19:29:17.426769    2624 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 19:29:17.426769    2624 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 19:29:17.436796    2624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65602.pem
	I0524 19:29:17.444152    2624 command_runner.go:130] > 3ec20f2e
	I0524 19:29:17.453176    2624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65602.pem /etc/ssl/certs/3ec20f2e.0"
	I0524 19:29:17.483142    2624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 19:29:17.510581    2624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:29:17.517190    2624 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:29:17.517190    2624 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:29:17.526895    2624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:29:17.535093    2624 command_runner.go:130] > b5213941
	I0524 19:29:17.544953    2624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 19:29:17.572404    2624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6560.pem && ln -fs /usr/share/ca-certificates/6560.pem /etc/ssl/certs/6560.pem"
	I0524 19:29:17.599331    2624 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6560.pem
	I0524 19:29:17.607487    2624 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 19:29:17.607622    2624 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 19:29:17.616593    2624 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6560.pem
	I0524 19:29:17.624817    2624 command_runner.go:130] > 51391683
	I0524 19:29:17.634488    2624 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6560.pem /etc/ssl/certs/51391683.0"
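
Each certificate copied under /usr/share/ca-certificates is also exposed in /etc/ssl/certs as <subject-hash>.0, the layout OpenSSL uses to locate trusted roots, with the hash taken from `openssl x509 -hash -noout`. A sketch of that hash-and-symlink step, shelling out to openssl the same way the log does (the paths are examples from this run):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and creates the
// /etc/ssl/certs/<hash>.0 symlink pointing at it, replacing any old link.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ignore "does not exist"; mirrors `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("error:", err)
	}
}
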
	I0524 19:29:17.658868    2624 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 19:29:17.669253    2624 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 19:29:17.669337    2624 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 19:29:17.675885    2624 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 19:29:17.720355    2624 command_runner.go:130] > cgroupfs
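
kubelet's cgroup driver has to match Docker's, so minikube queries the daemon with `docker info --format {{.CgroupDriver}}` and feeds the answer, cgroupfs here, into the kubeadm and kubelet configuration that follows. The same probe as a tiny standalone sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	driver := strings.TrimSpace(string(out)) // "cgroupfs" or "systemd"
	fmt.Println("docker cgroup driver:", driver)
}
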
	I0524 19:29:17.720465    2624 cni.go:84] Creating CNI manager for ""
	I0524 19:29:17.720550    2624 cni.go:136] 2 nodes found, recommending kindnet
	I0524 19:29:17.720550    2624 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 19:29:17.720629    2624 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.128.127 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-237000 NodeName:multinode-237000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.130.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.128.127 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 19:29:17.720878    2624 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.128.127
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-237000-m02"
	  kubeletExtraArgs:
	    node-ip: 172.27.128.127
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.130.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 19:29:17.720878    2624 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-237000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.128.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 19:29:17.731278    2624 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 19:29:17.749178    2624 command_runner.go:130] > kubeadm
	I0524 19:29:17.749178    2624 command_runner.go:130] > kubectl
	I0524 19:29:17.749178    2624 command_runner.go:130] > kubelet
	I0524 19:29:17.749178    2624 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 19:29:17.758700    2624 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0524 19:29:17.776109    2624 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I0524 19:29:17.806316    2624 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 19:29:17.850369    2624 ssh_runner.go:195] Run: grep 172.27.130.107	control-plane.minikube.internal$ /etc/hosts
	I0524 19:29:17.858322    2624 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.130.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
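The bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and re-appends it with the current control-plane IP, so the worker should end up with a line like the sketch below (values taken from this run):

	# expected /etc/hosts entry on the worker after the rewrite
	172.27.130.107	control-plane.minikube.internal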
	I0524 19:29:17.882934    2624 host.go:66] Checking if "multinode-237000" exists ...
	I0524 19:29:17.882934    2624 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:29:17.882934    2624 start.go:301] JoinCluster: &{Name:multinode-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.130.107 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.128.127 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:tr
ue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 19:29:17.883949    2624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0524 19:29:17.883949    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:29:18.649611    2624 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:29:18.649970    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:29:18.649970    2624 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:29:19.737297    2624 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:29:19.737297    2624 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:29:19.737649    2624 sshutil.go:53] new ssh client: &{IP:172.27.130.107 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:29:19.975997    2624 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token q7m9pt.evred1ophcxv0kqu --discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 
	I0524 19:29:19.975997    2624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm token create --print-join-command --ttl=0": (2.0920493s)
	I0524 19:29:19.975997    2624 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.27.128.127 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0524 19:29:19.975997    2624 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q7m9pt.evred1ophcxv0kqu --discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-237000-m02"
	I0524 19:29:20.232250    2624 command_runner.go:130] ! W0524 19:29:20.229777    1466 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0524 19:29:20.774565    2624 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 19:29:22.605131    2624 command_runner.go:130] > [preflight] Running pre-flight checks
	I0524 19:29:22.605131    2624 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0524 19:29:22.605247    2624 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0524 19:29:22.605247    2624 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 19:29:22.605247    2624 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 19:29:22.605247    2624 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0524 19:29:22.605247    2624 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0524 19:29:22.605247    2624 command_runner.go:130] > This node has joined the cluster:
	I0524 19:29:22.605247    2624 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0524 19:29:22.605247    2624 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0524 19:29:22.605247    2624 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0524 19:29:22.605247    2624 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token q7m9pt.evred1ophcxv0kqu --discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-237000-m02": (2.6292505s)
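For reference, a hedged sketch of the same join flow run by hand, mirroring the two commands above (token and CA-cert hash redacted; the unix:// scheme is added because the preflight warning above flags the scheme-less socket path as deprecated):

	# on the control plane: mint a non-expiring join command
	sudo kubeadm token create --print-join-command --ttl=0
	# on the worker: run the printed command against the cri-dockerd socket
	sudo kubeadm join control-plane.minikube.internal:8443 --token <token> \
	  --discovery-token-ca-cert-hash sha256:<hash> \
	  --cri-socket unix:///var/run/cri-dockerd.sock --node-name multinode-237000-m02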
	I0524 19:29:22.605380    2624 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0524 19:29:23.040718    2624 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0524 19:29:23.040718    2624 start.go:303] JoinCluster complete in 5.1577861s
	I0524 19:29:23.040718    2624 cni.go:84] Creating CNI manager for ""
	I0524 19:29:23.040718    2624 cni.go:136] 2 nodes found, recommending kindnet
	I0524 19:29:23.050585    2624 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0524 19:29:23.060096    2624 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0524 19:29:23.060096    2624 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0524 19:29:23.060096    2624 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0524 19:29:23.060096    2624 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0524 19:29:23.060096    2624 command_runner.go:130] > Access: 2023-05-24 19:26:05.600064400 +0000
	I0524 19:29:23.060096    2624 command_runner.go:130] > Modify: 2023-05-20 04:10:39.000000000 +0000
	I0524 19:29:23.060096    2624 command_runner.go:130] > Change: 2023-05-24 19:25:55.818000000 +0000
	I0524 19:29:23.060096    2624 command_runner.go:130] >  Birth: -
	I0524 19:29:23.060096    2624 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0524 19:29:23.060096    2624 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0524 19:29:23.112374    2624 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0524 19:29:23.671068    2624 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0524 19:29:23.671068    2624 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0524 19:29:23.671068    2624 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0524 19:29:23.671068    2624 command_runner.go:130] > daemonset.apps/kindnet configured
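With the kindnet manifest applied, a quick sketch (assuming the kubeconfig context matches the profile name) of how to confirm the DaemonSet picked up the newly joined node:

	# kindnet should report one pod per node once m02 is scheduled
	kubectl --context multinode-237000 -n kube-system get ds kindnet
	kubectl --context multinode-237000 -n kube-system get pods -o wide | grep kindnet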
	I0524 19:29:23.672466    2624 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:29:23.673150    2624 kapi.go:59] client config for multinode-237000: &rest.Config{Host:"https://172.27.130.107:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:29:23.674433    2624 round_trippers.go:463] GET https://172.27.130.107:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0524 19:29:23.674433    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:23.674433    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:23.674531    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:23.694024    2624 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0524 19:29:23.694024    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:23.694024    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:23.694024    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:23.694024    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:23.694024    2624 round_trippers.go:580]     Content-Length: 291
	I0524 19:29:23.694024    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:23 GMT
	I0524 19:29:23.694024    2624 round_trippers.go:580]     Audit-Id: 4c214bc3-e26f-459a-bcf2-373c859d12ce
	I0524 19:29:23.694024    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:23.694024    2624 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9016559d-3c59-4f76-8961-1b5665cb8836","resourceVersion":"426","creationTimestamp":"2023-05-24T19:27:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0524 19:29:23.694024    2624 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-237000" context rescaled to 1 replicas
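The scale check above leaves CoreDNS at a single replica for this two-node profile. A hedged kubectl equivalent, in case the rescale needs to be verified or reproduced by hand:

	# verify the replica count minikube settled on
	kubectl --context multinode-237000 -n kube-system get deploy coredns
	# manual equivalent of the rescale (sketch)
	kubectl --context multinode-237000 -n kube-system scale deploy coredns --replicas=1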
	I0524 19:29:23.694024    2624 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.27.128.127 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0524 19:29:23.697996    2624 out.go:177] * Verifying Kubernetes components...
	I0524 19:29:23.710089    2624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 19:29:23.743642    2624 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:29:23.744377    2624 kapi.go:59] client config for multinode-237000: &rest.Config{Host:"https://172.27.130.107:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:29:23.745111    2624 node_ready.go:35] waiting up to 6m0s for node "multinode-237000-m02" to be "Ready" ...
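The polling loop that follows repeatedly GETs the node object until its Ready condition flips to True. A rough kubectl equivalent of that wait, assuming the kubeconfig context matches the profile name:

	# watch the node, or block until it reports Ready (up to the same 6m budget)
	kubectl --context multinode-237000 get node multinode-237000-m02 -w
	kubectl --context multinode-237000 wait --for=condition=Ready node/multinode-237000-m02 --timeout=6m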
	I0524 19:29:23.745739    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:23.745739    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:23.745739    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:23.745739    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:23.751834    2624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:29:23.751834    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:23.751834    2624 round_trippers.go:580]     Audit-Id: 5599ffd8-2059-4ecc-beec-28d24e8ef154
	I0524 19:29:23.751928    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:23.751928    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:23.751928    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:23.751928    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:23.751928    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:23 GMT
	I0524 19:29:23.752207    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"536","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kube-controller-manager","operation":"Update","apiVer [truncated 4193 chars]
	I0524 19:29:24.253427    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:24.253753    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:24.253753    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:24.253753    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:24.263585    2624 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0524 19:29:24.263893    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:24.263893    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:24.263893    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:24.263893    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:24.263893    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:24 GMT
	I0524 19:29:24.263986    2624 round_trippers.go:580]     Audit-Id: 3a606e8f-f975-4091-b898-68e30887c394
	I0524 19:29:24.263986    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:24.264204    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:24.761005    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:24.761129    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:24.761129    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:24.761129    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:24.767693    2624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0524 19:29:24.767693    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:24.767693    2624 round_trippers.go:580]     Audit-Id: 65c1fd89-015e-4a5b-b2c6-44dc5b6bc90e
	I0524 19:29:24.767693    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:24.768564    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:24.768564    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:24.768564    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:24.768564    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:24 GMT
	I0524 19:29:24.768780    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:25.264008    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:25.264008    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:25.264008    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:25.264008    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:25.269623    2624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:29:25.269623    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:25.269623    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:25 GMT
	I0524 19:29:25.269764    2624 round_trippers.go:580]     Audit-Id: 5fb75792-a93d-446c-9cc0-75d6a7ee4d32
	I0524 19:29:25.269764    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:25.269764    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:25.269764    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:25.269764    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:25.269903    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:25.766412    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:25.766520    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:25.766520    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:25.766520    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:25.774973    2624 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0524 19:29:25.774973    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:25.774973    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:25 GMT
	I0524 19:29:25.774973    2624 round_trippers.go:580]     Audit-Id: 7a90b454-33d3-4daf-950e-73aa8a4d0bfc
	I0524 19:29:25.774973    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:25.774973    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:25.774973    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:25.774973    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:25.774973    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:25.775780    2624 node_ready.go:58] node "multinode-237000-m02" has status "Ready":"False"
	I0524 19:29:26.254914    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:26.254914    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:26.254914    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:26.254914    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:26.258988    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:29:26.258988    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:26.259708    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:26.259708    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:26.259708    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:26.259708    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:26 GMT
	I0524 19:29:26.259708    2624 round_trippers.go:580]     Audit-Id: eb9fc0bb-eec0-492a-9504-d45cce24b015
	I0524 19:29:26.259708    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:26.260015    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:26.757386    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:26.757621    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:26.757621    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:26.757621    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:26.762140    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:29:26.762140    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:26.762140    2624 round_trippers.go:580]     Audit-Id: 089e5442-2240-400c-b8fb-36aeac3fb8f0
	I0524 19:29:26.762140    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:26.762140    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:26.762140    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:26.762140    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:26.762140    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:26 GMT
	I0524 19:29:26.762140    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:27.257600    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:27.257600    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:27.257600    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:27.257600    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:27.262700    2624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:29:27.262700    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:27.262700    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:27.262700    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:27.263677    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:27.263677    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:27.263740    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:27 GMT
	I0524 19:29:27.263740    2624 round_trippers.go:580]     Audit-Id: 8df763c7-20bd-40d9-b5ed-7ebe8cf135a0
	I0524 19:29:27.264027    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:27.759530    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:27.759604    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:27.759604    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:27.759604    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:27.763618    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:27.763955    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:27.763955    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:27 GMT
	I0524 19:29:27.763955    2624 round_trippers.go:580]     Audit-Id: f51fdd3e-6cb0-4695-819e-3e9fa36f748f
	I0524 19:29:27.763955    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:27.763955    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:27.763955    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:27.764027    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:27.764179    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:28.265902    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:28.265902    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:28.265902    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:28.265902    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:28.269494    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:28.269494    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:28.269494    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:28 GMT
	I0524 19:29:28.269494    2624 round_trippers.go:580]     Audit-Id: ff7a4cb6-30cb-4383-aad8-f985c65140b8
	I0524 19:29:28.269494    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:28.269494    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:28.269494    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:28.269494    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:28.270223    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:28.270223    2624 node_ready.go:58] node "multinode-237000-m02" has status "Ready":"False"
	I0524 19:29:28.760563    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:28.760563    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:28.760563    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:28.760563    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:28.764143    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:28.764143    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:28.764143    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:28.764143    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:28.764143    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:28.764823    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:28 GMT
	I0524 19:29:28.764823    2624 round_trippers.go:580]     Audit-Id: 91058d7c-a71b-421d-933e-2349069ce148
	I0524 19:29:28.764823    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:28.765037    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:29.262205    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:29.262205    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:29.262205    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:29.262205    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:29.265779    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:29.266151    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:29.266227    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:29.266227    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:29 GMT
	I0524 19:29:29.266227    2624 round_trippers.go:580]     Audit-Id: dec33ba8-7cd0-4682-9407-42e39f97204a
	I0524 19:29:29.266227    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:29.266227    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:29.266227    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:29.266615    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:29.758977    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:29.758977    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:29.758977    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:29.758977    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:29.763593    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:29:29.764082    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:29.764082    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:29.764132    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:29.764132    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:29.764164    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:29.764164    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:29 GMT
	I0524 19:29:29.764164    2624 round_trippers.go:580]     Audit-Id: 1295c7fd-bbb9-41cd-8881-930beb9b37b0
	I0524 19:29:29.764164    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:30.261121    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:30.261121    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:30.261121    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:30.261121    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:30.267509    2624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:29:30.267509    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:30.267509    2624 round_trippers.go:580]     Audit-Id: edb9de19-b86c-4d2f-84ed-9388bb61d7b6
	I0524 19:29:30.267509    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:30.267509    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:30.267509    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:30.267509    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:30.267509    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:30 GMT
	I0524 19:29:30.267509    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:30.767548    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:30.767548    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:30.767548    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:30.767548    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:30.771525    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:30.772532    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:30.772532    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:30.772532    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:30 GMT
	I0524 19:29:30.772532    2624 round_trippers.go:580]     Audit-Id: a2f76ce6-b724-4b1a-80d8-45f0ffca5331
	I0524 19:29:30.772532    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:30.772532    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:30.772532    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:30.772532    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:30.774002    2624 node_ready.go:58] node "multinode-237000-m02" has status "Ready":"False"
	I0524 19:29:31.258993    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:31.258993    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:31.258993    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:31.258993    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:31.534299    2624 round_trippers.go:574] Response Status: 200 OK in 275 milliseconds
	I0524 19:29:31.534299    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:31.534299    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:31.534299    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:31 GMT
	I0524 19:29:31.534299    2624 round_trippers.go:580]     Audit-Id: 68d0ca51-65c4-40c3-b2f6-82863f532cec
	I0524 19:29:31.534299    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:31.534299    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:31.534299    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:31.535188    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:31.760706    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:31.760799    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:31.760799    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:31.760799    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:31.764223    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:31.764735    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:31.764735    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:31.764735    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:31.764735    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:31 GMT
	I0524 19:29:31.764735    2624 round_trippers.go:580]     Audit-Id: 632e3fc1-187b-4447-a8e9-3f8c90493526
	I0524 19:29:31.764735    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:31.764735    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:31.764735    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:32.264926    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:32.265001    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:32.265001    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:32.265001    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:32.268385    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:32.268385    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:32.268385    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:32 GMT
	I0524 19:29:32.268385    2624 round_trippers.go:580]     Audit-Id: bcee3a77-02b3-4597-86d9-105f2bbe4cf5
	I0524 19:29:32.268385    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:32.268385    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:32.269181    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:32.269181    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:32.269308    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"541","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4302 chars]
	I0524 19:29:32.753458    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:32.753458    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:32.753647    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:32.753647    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:32.758076    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:29:32.758563    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:32.758563    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:32.758563    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:32.758563    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:32.758563    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:32 GMT
	I0524 19:29:32.758563    2624 round_trippers.go:580]     Audit-Id: d3c61828-3045-446f-b610-01b7ebbf6331
	I0524 19:29:32.758563    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:32.759024    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"562","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4268 chars]
	I0524 19:29:33.265660    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:33.265773    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:33.265773    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:33.265857    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:33.273516    2624 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0524 19:29:33.273516    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:33.273575    2624 round_trippers.go:580]     Audit-Id: df89dcc8-dfa0-4951-8fac-d705ee879244
	I0524 19:29:33.273575    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:33.273575    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:33.273575    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:33.273575    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:33.273575    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:33 GMT
	I0524 19:29:33.273575    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"562","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4268 chars]
	I0524 19:29:33.274257    2624 node_ready.go:58] node "multinode-237000-m02" has status "Ready":"False"
	I0524 19:29:33.766712    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:33.766965    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:33.766965    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:33.766965    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:33.773883    2624 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0524 19:29:33.773883    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:33.773883    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:33.773883    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:33.773883    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:33 GMT
	I0524 19:29:33.773883    2624 round_trippers.go:580]     Audit-Id: ecc16b68-7db6-4a0d-aed7-da8e2a612670
	I0524 19:29:33.773883    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:33.773883    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:33.774893    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"562","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4268 chars]
	I0524 19:29:34.267436    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:34.267535    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:34.267604    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:34.267604    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:34.271499    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:34.272323    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:34.272323    2624 round_trippers.go:580]     Audit-Id: 5e5606b3-38a0-4e75-b357-7c8bc7f483af
	I0524 19:29:34.272323    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:34.272323    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:34.272323    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:34.272323    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:34.272323    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:34 GMT
	I0524 19:29:34.272710    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"562","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4268 chars]
	I0524 19:29:34.768056    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:34.768056    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:34.768657    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:34.768657    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:34.772975    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:29:34.772975    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:34.772975    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:34.773208    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:34.773208    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:34.773208    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:34 GMT
	I0524 19:29:34.773208    2624 round_trippers.go:580]     Audit-Id: 3e001546-2a19-4db6-a1ea-f5fcd9459d3c
	I0524 19:29:34.773313    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:34.773482    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"562","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4268 chars]
	I0524 19:29:35.267118    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:35.267118    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:35.267118    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:35.267118    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:35.270888    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:35.270888    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:35.271461    2624 round_trippers.go:580]     Audit-Id: 038deccb-971d-4e9b-b76d-be83a5e0521c
	I0524 19:29:35.271461    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:35.271461    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:35.271461    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:35.271461    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:35.271537    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:35 GMT
	I0524 19:29:35.271819    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"562","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4268 chars]
	I0524 19:29:35.758271    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:35.758271    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:35.758271    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:35.758271    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:35.771887    2624 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0524 19:29:35.771887    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:35.771887    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:35 GMT
	I0524 19:29:35.771887    2624 round_trippers.go:580]     Audit-Id: dcb2ea26-dc78-47eb-b7cb-edf6af7eea61
	I0524 19:29:35.771887    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:35.771887    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:35.771887    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:35.771887    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:35.771887    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"562","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4268 chars]
	I0524 19:29:35.771887    2624 node_ready.go:58] node "multinode-237000-m02" has status "Ready":"False"
	I0524 19:29:36.253226    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:36.253315    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:36.253315    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:36.253382    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:36.256721    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:36.256721    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:36.256935    2624 round_trippers.go:580]     Audit-Id: 018a0023-255d-403c-afc2-249ca0e9dfa4
	I0524 19:29:36.256935    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:36.256935    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:36.256935    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:36.256935    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:36.256935    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:36 GMT
	I0524 19:29:36.257255    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"562","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4268 chars]
	I0524 19:29:36.765734    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:36.765883    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:36.765883    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:36.765883    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:36.771257    2624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:29:36.771350    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:36.771350    2624 round_trippers.go:580]     Audit-Id: cb4d3cd1-eaf6-4bad-b018-7a896310295d
	I0524 19:29:36.771350    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:36.771350    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:36.771350    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:36.771350    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:36.771350    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:36 GMT
	I0524 19:29:36.771350    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"562","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4268 chars]
	I0524 19:29:37.265851    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:37.265950    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:37.265950    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:37.265950    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:37.269891    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:37.270882    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:37.270934    2624 round_trippers.go:580]     Audit-Id: e715447d-9efb-47d7-afa7-c1faa1d2a89a
	I0524 19:29:37.270934    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:37.270934    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:37.270934    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:37.270934    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:37.270934    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:37 GMT
	I0524 19:29:37.270934    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"568","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4134 chars]
	I0524 19:29:37.271595    2624 node_ready.go:49] node "multinode-237000-m02" has status "Ready":"True"
	I0524 19:29:37.271595    2624 node_ready.go:38] duration metric: took 13.5259564s waiting for node "multinode-237000-m02" to be "Ready" ...
	I0524 19:29:37.271595    2624 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 19:29:37.276021    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods
	I0524 19:29:37.276021    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:37.276021    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:37.276021    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:37.281616    2624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:29:37.281616    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:37.281616    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:37.281616    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:37.282516    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:37.282516    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:37.282516    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:37 GMT
	I0524 19:29:37.282516    2624 round_trippers.go:580]     Audit-Id: 9fbe46b1-9c9f-415d-8744-c5fbc0be583e
	I0524 19:29:37.282932    2624 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"568"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"422","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67520 chars]
	I0524 19:29:37.287207    2624 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace to be "Ready" ...
	I0524 19:29:37.287207    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:29:37.287380    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:37.287380    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:37.287380    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:37.290678    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:37.290678    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:37.290678    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:37.290678    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:37 GMT
	I0524 19:29:37.291009    2624 round_trippers.go:580]     Audit-Id: fe98a8d9-eecf-4e36-9263-471f19b28a55
	I0524 19:29:37.291009    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:37.291009    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:37.291009    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:37.291261    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"422","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6284 chars]
	I0524 19:29:37.291862    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:29:37.291862    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:37.291927    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:37.291927    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:37.300224    2624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:29:37.300435    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:37.300435    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:37.300435    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:37.300435    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:37.300435    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:37 GMT
	I0524 19:29:37.300435    2624 round_trippers.go:580]     Audit-Id: bd3c03d5-9777-406d-9616-7d88e84800c3
	I0524 19:29:37.300435    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:37.300647    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"428","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4961 chars]
	I0524 19:29:37.300868    2624 pod_ready.go:92] pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace has status "Ready":"True"
	I0524 19:29:37.300868    2624 pod_ready.go:81] duration metric: took 13.6612ms waiting for pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace to be "Ready" ...
	I0524 19:29:37.300868    2624 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:29:37.300868    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-237000
	I0524 19:29:37.300868    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:37.300868    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:37.300868    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:37.304598    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:37.304598    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:37.304598    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:37.304598    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:37.304598    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:37 GMT
	I0524 19:29:37.304598    2624 round_trippers.go:580]     Audit-Id: 0146cd97-1fe9-4f1a-9f2d-7b7d3bf52832
	I0524 19:29:37.304598    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:37.304598    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:37.305207    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-237000","namespace":"kube-system","uid":"981422ac-e671-44a5-9ad2-b1d9e5ff7133","resourceVersion":"389","creationTimestamp":"2023-05-24T19:27:12Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.130.107:2379","kubernetes.io/config.hash":"b50925fc64d689df6b7c835d5181c1ec","kubernetes.io/config.mirror":"b50925fc64d689df6b7c835d5181c1ec","kubernetes.io/config.seen":"2023-05-24T19:27:12.143962733Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5872 chars]
	I0524 19:29:37.305814    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:29:37.305814    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:37.305814    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:37.305814    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:37.309440    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:37.309440    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:37.309440    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:37.309440    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:37.309440    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:37.309440    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:37 GMT
	I0524 19:29:37.309440    2624 round_trippers.go:580]     Audit-Id: a47421b4-be48-46c3-8556-389a1a761347
	I0524 19:29:37.309440    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:37.310436    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"428","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4961 chars]
	I0524 19:29:37.310436    2624 pod_ready.go:92] pod "etcd-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:29:37.310436    2624 pod_ready.go:81] duration metric: took 9.5681ms waiting for pod "etcd-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:29:37.310436    2624 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:29:37.310436    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-237000
	I0524 19:29:37.310436    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:37.310436    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:37.310436    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:37.314455    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:29:37.314455    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:37.314455    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:37 GMT
	I0524 19:29:37.314455    2624 round_trippers.go:580]     Audit-Id: 785d5333-1bc3-4898-bd1a-7c449b9d29af
	I0524 19:29:37.314455    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:37.314455    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:37.314455    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:37.314455    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:37.314834    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-237000","namespace":"kube-system","uid":"a516131e-ab1a-41f9-95ca-cbfb556e1380","resourceVersion":"390","creationTimestamp":"2023-05-24T19:27:11Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.130.107:8443","kubernetes.io/config.hash":"9df549a886a8b8feca4108c5fa576f3b","kubernetes.io/config.mirror":"9df549a886a8b8feca4108c5fa576f3b","kubernetes.io/config.seen":"2023-05-24T19:27:00.264374544Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:11Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7408 chars]
	I0524 19:29:37.315531    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:29:37.315531    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:37.315531    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:37.315531    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:37.320778    2624 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:29:37.320778    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:37.320778    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:37.320778    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:37.320778    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:37 GMT
	I0524 19:29:37.320778    2624 round_trippers.go:580]     Audit-Id: a034dc75-d277-47d5-aea3-d33665c280d4
	I0524 19:29:37.320778    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:37.320778    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:37.320778    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"428","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4961 chars]
	I0524 19:29:37.320778    2624 pod_ready.go:92] pod "kube-apiserver-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:29:37.320778    2624 pod_ready.go:81] duration metric: took 10.3421ms waiting for pod "kube-apiserver-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:29:37.321867    2624 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:29:37.321867    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-237000
	I0524 19:29:37.322015    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:37.322015    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:37.322066    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:37.325819    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:37.325819    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:37.325819    2624 round_trippers.go:580]     Audit-Id: b6a8cff1-59de-40ca-9a41-b9edbb04897a
	I0524 19:29:37.325819    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:37.325819    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:37.326560    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:37.326560    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:37.326560    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:37 GMT
	I0524 19:29:37.326738    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-237000","namespace":"kube-system","uid":"1ff7b570-afe4-4076-989f-d0377d04f9d5","resourceVersion":"387","creationTimestamp":"2023-05-24T19:27:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"64b5c92760605da2056b367669d6fc80","kubernetes.io/config.mirror":"64b5c92760605da2056b367669d6fc80","kubernetes.io/config.seen":"2023-05-24T19:27:00.264375644Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6973 chars]
	I0524 19:29:37.327372    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:29:37.327372    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:37.327372    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:37.327372    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:37.331969    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:29:37.331969    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:37.331969    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:37.331969    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:37.331969    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:37 GMT
	I0524 19:29:37.331969    2624 round_trippers.go:580]     Audit-Id: b5f7f9fa-386e-4e5a-b102-308728f55323
	I0524 19:29:37.331969    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:37.331969    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:37.332958    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"428","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4961 chars]
	I0524 19:29:37.333408    2624 pod_ready.go:92] pod "kube-controller-manager-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:29:37.333408    2624 pod_ready.go:81] duration metric: took 11.5406ms waiting for pod "kube-controller-manager-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:29:37.333473    2624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r6f94" in "kube-system" namespace to be "Ready" ...
	I0524 19:29:37.468239    2624 request.go:628] Waited for 134.5365ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6f94
	I0524 19:29:37.468485    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6f94
	I0524 19:29:37.468485    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:37.468485    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:37.468545    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:37.471806    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:37.472626    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:37.472626    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:37 GMT
	I0524 19:29:37.472626    2624 round_trippers.go:580]     Audit-Id: 30060225-5d29-4707-b99e-c22db7803677
	I0524 19:29:37.472626    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:37.472626    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:37.472753    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:37.472753    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:37.472964    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r6f94","generateName":"kube-proxy-","namespace":"kube-system","uid":"90a232cf-33b3-4e3b-82bf-9050d39109d1","resourceVersion":"385","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5535 chars]
	I0524 19:29:37.670100    2624 request.go:628] Waited for 196.3047ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:29:37.670358    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:29:37.670358    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:37.670358    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:37.670425    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:37.674885    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:29:37.674885    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:37.674885    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:37.674885    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:37.674885    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:37 GMT
	I0524 19:29:37.674991    2624 round_trippers.go:580]     Audit-Id: 16d6d630-70e6-485d-b8ae-7d21d31fa186
	I0524 19:29:37.674991    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:37.674991    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:37.675391    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"428","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4961 chars]
	I0524 19:29:37.675879    2624 pod_ready.go:92] pod "kube-proxy-r6f94" in "kube-system" namespace has status "Ready":"True"
	I0524 19:29:37.675983    2624 pod_ready.go:81] duration metric: took 342.5097ms waiting for pod "kube-proxy-r6f94" in "kube-system" namespace to be "Ready" ...
	I0524 19:29:37.675983    2624 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zglzj" in "kube-system" namespace to be "Ready" ...
	I0524 19:29:37.875725    2624 request.go:628] Waited for 199.4035ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zglzj
	I0524 19:29:37.875725    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zglzj
	I0524 19:29:37.875725    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:37.875725    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:37.875974    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:37.879273    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:37.879273    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:37.879273    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:37.879273    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:37.879273    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:37.879789    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:37 GMT
	I0524 19:29:37.879789    2624 round_trippers.go:580]     Audit-Id: b4a0b8e1-42bf-4efd-9d11-6c38f34bf4ee
	I0524 19:29:37.879789    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:37.880075    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zglzj","generateName":"kube-proxy-","namespace":"kube-system","uid":"af1fb911-5877-4bcc-92f4-5571f489122c","resourceVersion":"550","creationTimestamp":"2023-05-24T19:29:22Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5543 chars]
	I0524 19:29:38.081113    2624 request.go:628] Waited for 200.1957ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:38.081178    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:29:38.081178    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:38.081340    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:38.081340    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:38.089703    2624 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:29:38.089703    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:38.089703    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:38.089703    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:38 GMT
	I0524 19:29:38.089703    2624 round_trippers.go:580]     Audit-Id: f620eaf2-a992-44bd-8369-5d4f214cd1df
	I0524 19:29:38.089703    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:38.089703    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:38.089703    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:38.089703    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"568","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4134 chars]
	I0524 19:29:38.091465    2624 pod_ready.go:92] pod "kube-proxy-zglzj" in "kube-system" namespace has status "Ready":"True"
	I0524 19:29:38.091465    2624 pod_ready.go:81] duration metric: took 415.4826ms waiting for pod "kube-proxy-zglzj" in "kube-system" namespace to be "Ready" ...
	I0524 19:29:38.091465    2624 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:29:38.266928    2624 request.go:628] Waited for 175.296ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-237000
	I0524 19:29:38.267020    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-237000
	I0524 19:29:38.267113    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:38.267147    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:38.267184    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:38.270949    2624 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:29:38.270949    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:38.271625    2624 round_trippers.go:580]     Audit-Id: 1ec2794f-44aa-44d7-bb5f-4c35f16b3a6a
	I0524 19:29:38.271625    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:38.271625    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:38.271725    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:38.271725    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:38.271794    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:38 GMT
	I0524 19:29:38.272060    2624 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-237000","namespace":"kube-system","uid":"a55c419f-1b04-4895-9fd5-02dd67cd888f","resourceVersion":"388","creationTimestamp":"2023-05-24T19:27:12Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b26a06953be724b5f34183ed712fbb3d","kubernetes.io/config.mirror":"b26a06953be724b5f34183ed712fbb3d","kubernetes.io/config.seen":"2023-05-24T19:27:12.143961333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4703 chars]
	I0524 19:29:38.469783    2624 request.go:628] Waited for 196.7399ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:29:38.469783    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes/multinode-237000
	I0524 19:29:38.469783    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:38.469783    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:38.469783    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:38.474413    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:29:38.474456    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:38.474456    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:38.474456    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:38.474456    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:38.474456    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:38.474456    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:38 GMT
	I0524 19:29:38.474456    2624 round_trippers.go:580]     Audit-Id: 8650b3ff-a738-4d9f-aaf1-956f241c7901
	I0524 19:29:38.474456    2624 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"428","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","fi [truncated 4961 chars]
	I0524 19:29:38.475213    2624 pod_ready.go:92] pod "kube-scheduler-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:29:38.475264    2624 pod_ready.go:81] duration metric: took 383.799ms waiting for pod "kube-scheduler-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:29:38.475317    2624 pod_ready.go:38] duration metric: took 1.2037228s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 19:29:38.475383    2624 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 19:29:38.485481    2624 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 19:29:38.509091    2624 system_svc.go:56] duration metric: took 33.7078ms WaitForService to wait for kubelet.
	I0524 19:29:38.509127    2624 kubeadm.go:581] duration metric: took 14.8151097s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 19:29:38.509127    2624 node_conditions.go:102] verifying NodePressure condition ...
	I0524 19:29:38.672025    2624 request.go:628] Waited for 162.6992ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.130.107:8443/api/v1/nodes
	I0524 19:29:38.672568    2624 round_trippers.go:463] GET https://172.27.130.107:8443/api/v1/nodes
	I0524 19:29:38.672568    2624 round_trippers.go:469] Request Headers:
	I0524 19:29:38.672568    2624 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:29:38.672568    2624 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:29:38.676924    2624 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:29:38.676924    2624 round_trippers.go:577] Response Headers:
	I0524 19:29:38.676924    2624 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:29:38.676924    2624 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:29:38.676924    2624 round_trippers.go:580]     Date: Wed, 24 May 2023 19:29:38 GMT
	I0524 19:29:38.676924    2624 round_trippers.go:580]     Audit-Id: cc7cc317-866d-4c76-888d-629c7f3fc0b9
	I0524 19:29:38.676924    2624 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:29:38.676924    2624 round_trippers.go:580]     Content-Type: application/json
	I0524 19:29:38.677449    2624 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"569"},"items":[{"metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"428","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10140 chars]
	I0524 19:29:38.678552    2624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:29:38.678612    2624 node_conditions.go:123] node cpu capacity is 2
	I0524 19:29:38.678612    2624 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:29:38.678691    2624 node_conditions.go:123] node cpu capacity is 2
	I0524 19:29:38.678691    2624 node_conditions.go:105] duration metric: took 169.5637ms to run NodePressure ...
	I0524 19:29:38.678691    2624 start.go:228] waiting for startup goroutines ...
	I0524 19:29:38.678691    2624 start.go:242] writing updated cluster config ...
	I0524 19:29:38.690000    2624 ssh_runner.go:195] Run: rm -f paused
	I0524 19:29:38.875077    2624 start.go:568] kubectl: 1.18.2, cluster: 1.27.2 (minor skew: 9)
	I0524 19:29:38.877802    2624 out.go:177] 
	W0524 19:29:38.880492    2624 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.27.2.
	I0524 19:29:38.883398    2624 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0524 19:29:38.887521    2624 out.go:177] * Done! kubectl is now configured to use "multinode-237000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-05-24 19:25:57 UTC, ends at Wed 2023-05-24 19:30:29 UTC. --
	May 24 19:27:39 multinode-237000 dockerd[1147]: time="2023-05-24T19:27:39.002165544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:27:39 multinode-237000 cri-dockerd[1335]: time="2023-05-24T19:27:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8f0c4af1fb5a2ffddfd6f583be83494f233455997e02e23cca5f7ed1c1c09455/resolv.conf as [nameserver 172.27.128.1]"
	May 24 19:27:39 multinode-237000 dockerd[1147]: time="2023-05-24T19:27:39.633310929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:27:39 multinode-237000 dockerd[1147]: time="2023-05-24T19:27:39.633394228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:27:39 multinode-237000 dockerd[1147]: time="2023-05-24T19:27:39.633609225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:27:39 multinode-237000 dockerd[1147]: time="2023-05-24T19:27:39.633628225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:27:40 multinode-237000 dockerd[1147]: time="2023-05-24T19:27:40.502085470Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:27:40 multinode-237000 dockerd[1147]: time="2023-05-24T19:27:40.502303968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:27:40 multinode-237000 dockerd[1147]: time="2023-05-24T19:27:40.502364167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:27:40 multinode-237000 dockerd[1147]: time="2023-05-24T19:27:40.502454366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:27:41 multinode-237000 cri-dockerd[1335]: time="2023-05-24T19:27:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7975ebab5fd50f72baed98d4f8516871ca5e3312e011c102c0b0b05fe4899c4f/resolv.conf as [nameserver 172.27.128.1]"
	May 24 19:27:41 multinode-237000 dockerd[1147]: time="2023-05-24T19:27:41.320890784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:27:41 multinode-237000 dockerd[1147]: time="2023-05-24T19:27:41.321091682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:27:41 multinode-237000 dockerd[1147]: time="2023-05-24T19:27:41.321110481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:27:41 multinode-237000 dockerd[1147]: time="2023-05-24T19:27:41.321182281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:29:50 multinode-237000 dockerd[1147]: time="2023-05-24T19:29:50.203119620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:29:50 multinode-237000 dockerd[1147]: time="2023-05-24T19:29:50.203518323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:29:50 multinode-237000 dockerd[1147]: time="2023-05-24T19:29:50.204365429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:29:50 multinode-237000 dockerd[1147]: time="2023-05-24T19:29:50.204544830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:29:50 multinode-237000 cri-dockerd[1335]: time="2023-05-24T19:29:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/44b4e40976026b04324bb77bb9f5c5b4c435c1d35fcac7028f5f4bfed8ca071f/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 24 19:29:51 multinode-237000 cri-dockerd[1335]: time="2023-05-24T19:29:51Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 24 19:29:52 multinode-237000 dockerd[1147]: time="2023-05-24T19:29:52.044680750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:29:52 multinode-237000 dockerd[1147]: time="2023-05-24T19:29:52.044784450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:29:52 multinode-237000 dockerd[1147]: time="2023-05-24T19:29:52.044832351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:29:52 multinode-237000 dockerd[1147]: time="2023-05-24T19:29:52.044849151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	914b54caf4688       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   38 seconds ago      Running             busybox                   0                   44b4e40976026
	0be0b91d64125       ead0a4a53df89                                                                                         2 minutes ago       Running             coredns                   0                   7975ebab5fd50
	8b4ccab3df53d       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       0                   8f0c4af1fb5a2
	a5f82b77134ca       kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974              2 minutes ago       Running             kindnet-cni               0                   0dcfd5ea3653b
	9e3f0057f97c2       b8aa50768fd67                                                                                         3 minutes ago       Running             kube-proxy                0                   eca8b08a45760
	7589cfe30be6d       86b6af7dd652c                                                                                         3 minutes ago       Running             etcd                      0                   1f6b2c280e52b
	bde0fe1b24588       89e70da428d29                                                                                         3 minutes ago       Running             kube-scheduler            0                   4d0c225625eb3
	c29b9004260c0       ac2b7465ebba9                                                                                         3 minutes ago       Running             kube-controller-manager   0                   0c8db54a682ad
	30b43ae6055b8       c5b13e4f7806d                                                                                         3 minutes ago       Running             kube-apiserver            0                   a31c29e9f7981
	
	* 
	* ==> coredns [0be0b91d6412] <==
	* [INFO] 10.244.0.3:46498 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000206401s
	[INFO] 10.244.1.2:59815 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001078s
	[INFO] 10.244.1.2:54283 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000137801s
	[INFO] 10.244.1.2:52452 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174501s
	[INFO] 10.244.1.2:43678 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049501s
	[INFO] 10.244.1.2:40107 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000109301s
	[INFO] 10.244.1.2:38099 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000673s
	[INFO] 10.244.1.2:54570 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056101s
	[INFO] 10.244.1.2:48943 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059s
	[INFO] 10.244.0.3:51704 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000841s
	[INFO] 10.244.0.3:59397 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056801s
	[INFO] 10.244.0.3:36461 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000493s
	[INFO] 10.244.0.3:52950 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052s
	[INFO] 10.244.1.2:56683 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155301s
	[INFO] 10.244.1.2:38488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000898s
	[INFO] 10.244.1.2:56116 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060901s
	[INFO] 10.244.1.2:42911 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177601s
	[INFO] 10.244.0.3:39964 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001114s
	[INFO] 10.244.0.3:35026 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000286702s
	[INFO] 10.244.0.3:57544 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000221901s
	[INFO] 10.244.0.3:47379 - 5 "PTR IN 1.128.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000443903s
	[INFO] 10.244.1.2:41971 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125701s
	[INFO] 10.244.1.2:34141 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128201s
	[INFO] 10.244.1.2:52982 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000613s
	[INFO] 10.244.1.2:58165 - 5 "PTR IN 1.128.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000549s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-237000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-237000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=multinode-237000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T19_27_13_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 19:27:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-237000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 19:30:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 19:30:16 +0000   Wed, 24 May 2023 19:27:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 19:30:16 +0000   Wed, 24 May 2023 19:27:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 19:30:16 +0000   Wed, 24 May 2023 19:27:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 19:30:16 +0000   Wed, 24 May 2023 19:27:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.130.107
	  Hostname:    multinode-237000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 daadfbef686749ec86f553b234da3a08
	  System UUID:                a1fd074e-9d37-804e-9507-e627f053ff31
	  Boot ID:                    c39bf08c-a1b8-48f1-8631-aa31a5354e08
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-9t5bp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 coredns-5d78c9869d-qhx48                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m5s
	  kube-system                 etcd-multinode-237000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m17s
	  kube-system                 kindnet-xgkpb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m5s
	  kube-system                 kube-apiserver-multinode-237000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 kube-controller-manager-multinode-237000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m19s
	  kube-system                 kube-proxy-r6f94                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m5s
	  kube-system                 kube-scheduler-multinode-237000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m3s   kube-proxy       
	  Normal  Starting                 3m17s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m17s  kubelet          Node multinode-237000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m17s  kubelet          Node multinode-237000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m17s  kubelet          Node multinode-237000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m6s   node-controller  Node multinode-237000 event: Registered Node multinode-237000 in Controller
	  Normal  NodeReady                2m51s  kubelet          Node multinode-237000 status is now: NodeReady
	
	
	Name:               multinode-237000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-237000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 19:29:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-237000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 19:30:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 19:29:52 +0000   Wed, 24 May 2023 19:29:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 19:29:52 +0000   Wed, 24 May 2023 19:29:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 19:29:52 +0000   Wed, 24 May 2023 19:29:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 19:29:52 +0000   Wed, 24 May 2023 19:29:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.128.127
	  Hostname:    multinode-237000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 740638efcc1749c2b9b0bbce077edce4
	  System UUID:                d6e2dfd5-eaf1-6e40-9a4d-231923fae672
	  Boot ID:                    8473675d-78db-44c9-a162-46ee8843b228
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-tdzj2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kindnet-9g7mc              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      67s
	  kube-system                 kube-proxy-zglzj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 65s                kube-proxy       
	  Normal  Starting                 68s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s (x2 over 68s)  kubelet          Node multinode-237000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x2 over 68s)  kubelet          Node multinode-237000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x2 over 68s)  kubelet          Node multinode-237000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  68s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           66s                node-controller  Node multinode-237000-m02 event: Registered Node multinode-237000-m02 in Controller
	  Normal  NodeReady                53s                kubelet          Node multinode-237000-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +1.139137] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.731507] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.168607] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[May24 19:26] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000110] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000003] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +17.076240] systemd-fstab-generator[650]: Ignoring "noauto" for root device
	[  +0.185785] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[ +22.629824] systemd-fstab-generator[905]: Ignoring "noauto" for root device
	[  +2.504418] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.645834] systemd-fstab-generator[1068]: Ignoring "noauto" for root device
	[  +0.611588] systemd-fstab-generator[1108]: Ignoring "noauto" for root device
	[  +0.154248] systemd-fstab-generator[1119]: Ignoring "noauto" for root device
	[  +0.207447] systemd-fstab-generator[1132]: Ignoring "noauto" for root device
	[  +1.721257] systemd-fstab-generator[1280]: Ignoring "noauto" for root device
	[  +0.201765] systemd-fstab-generator[1291]: Ignoring "noauto" for root device
	[  +0.181002] systemd-fstab-generator[1302]: Ignoring "noauto" for root device
	[  +0.178298] systemd-fstab-generator[1313]: Ignoring "noauto" for root device
	[  +0.238589] systemd-fstab-generator[1327]: Ignoring "noauto" for root device
	[  +6.531334] systemd-fstab-generator[1588]: Ignoring "noauto" for root device
	[  +0.633330] kauditd_printk_skb: 68 callbacks suppressed
	[May24 19:27] hrtimer: interrupt took 2094890 ns
	[  +8.256646] systemd-fstab-generator[2618]: Ignoring "noauto" for root device
	[ +23.532859] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [7589cfe30be6] <==
	* {"level":"info","ts":"2023-05-24T19:27:48.085Z","caller":"traceutil/trace.go:171","msg":"trace[1658236597] transaction","detail":"{read_only:false; response_revision:431; number_of_response:1; }","duration":"130.094165ms","start":"2023-05-24T19:27:47.955Z","end":"2023-05-24T19:27:48.085Z","steps":["trace[1658236597] 'process raft request'  (duration: 129.914167ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-24T19:29:03.039Z","caller":"traceutil/trace.go:171","msg":"trace[1736973215] transaction","detail":"{read_only:false; response_revision:491; number_of_response:1; }","duration":"106.019686ms","start":"2023-05-24T19:29:02.933Z","end":"2023-05-24T19:29:03.039Z","steps":["trace[1736973215] 'process raft request'  (duration: 105.811986ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-24T19:29:03.594Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.5031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-05-24T19:29:03.594Z","caller":"traceutil/trace.go:171","msg":"trace[709934096] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:491; }","duration":"270.6332ms","start":"2023-05-24T19:29:03.323Z","end":"2023-05-24T19:29:03.594Z","steps":["trace[709934096] 'range keys from in-memory index tree'  (duration: 270.174501ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-24T19:29:04.994Z","caller":"traceutil/trace.go:171","msg":"trace[632257647] transaction","detail":"{read_only:false; response_revision:492; number_of_response:1; }","duration":"180.757069ms","start":"2023-05-24T19:29:04.813Z","end":"2023-05-24T19:29:04.994Z","steps":["trace[632257647] 'process raft request'  (duration: 180.44917ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-24T19:29:05.191Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.145374ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1116"}
	{"level":"info","ts":"2023-05-24T19:29:05.191Z","caller":"traceutil/trace.go:171","msg":"trace[1795229840] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:492; }","duration":"146.433173ms","start":"2023-05-24T19:29:05.045Z","end":"2023-05-24T19:29:05.191Z","steps":["trace[1795229840] 'range keys from in-memory index tree'  (duration: 146.026174ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-24T19:29:05.394Z","caller":"traceutil/trace.go:171","msg":"trace[2110888372] transaction","detail":"{read_only:false; response_revision:493; number_of_response:1; }","duration":"197.052625ms","start":"2023-05-24T19:29:05.197Z","end":"2023-05-24T19:29:05.394Z","steps":["trace[2110888372] 'process raft request'  (duration: 196.950026ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-24T19:29:05.837Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.74556ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2458270845482180469 > lease_revoke:<id:221d884f3aaa1f31>","response":"size:27"}
	{"level":"info","ts":"2023-05-24T19:29:07.511Z","caller":"traceutil/trace.go:171","msg":"trace[1866771101] transaction","detail":"{read_only:false; response_revision:494; number_of_response:1; }","duration":"106.294094ms","start":"2023-05-24T19:29:07.405Z","end":"2023-05-24T19:29:07.511Z","steps":["trace[1866771101] 'process raft request'  (duration: 106.036395ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-24T19:29:09.743Z","caller":"traceutil/trace.go:171","msg":"trace[1759676360] linearizableReadLoop","detail":"{readStateIndex:528; appliedIndex:527; }","duration":"196.737241ms","start":"2023-05-24T19:29:09.546Z","end":"2023-05-24T19:29:09.743Z","steps":["trace[1759676360] 'read index received'  (duration: 196.474241ms)","trace[1759676360] 'applied index is now lower than readState.Index'  (duration: 262.3µs)"],"step_count":2}
	{"level":"warn","ts":"2023-05-24T19:29:09.744Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.184139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-05-24T19:29:09.744Z","caller":"traceutil/trace.go:171","msg":"trace[178708182] transaction","detail":"{read_only:false; response_revision:495; number_of_response:1; }","duration":"223.822964ms","start":"2023-05-24T19:29:09.520Z","end":"2023-05-24T19:29:09.744Z","steps":["trace[178708182] 'process raft request'  (duration: 222.858066ms)"],"step_count":1}
	{"level":"info","ts":"2023-05-24T19:29:09.744Z","caller":"traceutil/trace.go:171","msg":"trace[1654541588] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:495; }","duration":"197.241639ms","start":"2023-05-24T19:29:09.546Z","end":"2023-05-24T19:29:09.744Z","steps":["trace[1654541588] 'agreement among raft nodes before linearized reading'  (duration: 196.82804ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-24T19:29:09.998Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"210.492901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-05-24T19:29:09.998Z","caller":"traceutil/trace.go:171","msg":"trace[1847383280] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:495; }","duration":"210.617701ms","start":"2023-05-24T19:29:09.787Z","end":"2023-05-24T19:29:09.998Z","steps":["trace[1847383280] 'count revisions from in-memory index tree'  (duration: 210.201502ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-24T19:29:10.839Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"296.777961ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2458270845482180495 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.27.130.107\" mod_revision:489 > success:<request_put:<key:\"/registry/masterleases/172.27.130.107\" value_size:67 lease:2458270845482180493 >> failure:<request_range:<key:\"/registry/masterleases/172.27.130.107\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-05-24T19:29:10.840Z","caller":"traceutil/trace.go:171","msg":"trace[1103372944] transaction","detail":"{read_only:false; response_revision:497; number_of_response:1; }","duration":"324.369484ms","start":"2023-05-24T19:29:10.516Z","end":"2023-05-24T19:29:10.840Z","steps":["trace[1103372944] 'process raft request'  (duration: 323.726786ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-24T19:29:10.840Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-05-24T19:29:10.516Z","time spent":"324.437384ms","remote":"127.0.0.1:40030","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":681,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-eozarljd2kd7jv3egglj7w3tsu\" mod_revision:488 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-eozarljd2kd7jv3egglj7w3tsu\" value_size:608 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-eozarljd2kd7jv3egglj7w3tsu\" > >"}
	{"level":"info","ts":"2023-05-24T19:29:10.841Z","caller":"traceutil/trace.go:171","msg":"trace[1843630118] transaction","detail":"{read_only:false; response_revision:496; number_of_response:1; }","duration":"496.385198ms","start":"2023-05-24T19:29:10.344Z","end":"2023-05-24T19:29:10.841Z","steps":["trace[1843630118] 'process raft request'  (duration: 197.718942ms)","trace[1843630118] 'compare'  (duration: 296.671361ms)"],"step_count":2}
	{"level":"warn","ts":"2023-05-24T19:29:10.841Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-05-24T19:29:10.344Z","time spent":"496.426798ms","remote":"127.0.0.1:39980","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.27.130.107\" mod_revision:489 > success:<request_put:<key:\"/registry/masterleases/172.27.130.107\" value_size:67 lease:2458270845482180493 >> failure:<request_range:<key:\"/registry/masterleases/172.27.130.107\" > >"}
	{"level":"warn","ts":"2023-05-24T19:29:31.528Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.481894ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-05-24T19:29:31.529Z","caller":"traceutil/trace.go:171","msg":"trace[44021825] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:554; }","duration":"207.659005ms","start":"2023-05-24T19:29:31.321Z","end":"2023-05-24T19:29:31.529Z","steps":["trace[44021825] 'range keys from in-memory index tree'  (duration: 206.390992ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-24T19:29:31.530Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.890714ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-237000-m02\" ","response":"range_response_count:1 size:3898"}
	{"level":"info","ts":"2023-05-24T19:29:31.530Z","caller":"traceutil/trace.go:171","msg":"trace[663442992] range","detail":"{range_begin:/registry/minions/multinode-237000-m02; range_end:; response_count:1; response_revision:554; }","duration":"270.946415ms","start":"2023-05-24T19:29:31.259Z","end":"2023-05-24T19:29:31.530Z","steps":["trace[663442992] 'range keys from in-memory index tree'  (duration: 270.706813ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  19:30:30 up 4 min,  0 users,  load average: 1.20, 0.72, 0.31
	Linux multinode-237000 5.10.57 #1 SMP Sat May 20 03:22:25 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [a5f82b77134c] <==
	* I0524 19:29:25.217234       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.27.128.127 Flags: [] Table: 0} 
	I0524 19:29:35.225368       1 main.go:223] Handling node with IPs: map[172.27.130.107:{}]
	I0524 19:29:35.225456       1 main.go:227] handling current node
	I0524 19:29:35.225573       1 main.go:223] Handling node with IPs: map[172.27.128.127:{}]
	I0524 19:29:35.225588       1 main.go:250] Node multinode-237000-m02 has CIDR [10.244.1.0/24] 
	I0524 19:29:45.240950       1 main.go:223] Handling node with IPs: map[172.27.130.107:{}]
	I0524 19:29:45.241093       1 main.go:227] handling current node
	I0524 19:29:45.241121       1 main.go:223] Handling node with IPs: map[172.27.128.127:{}]
	I0524 19:29:45.241130       1 main.go:250] Node multinode-237000-m02 has CIDR [10.244.1.0/24] 
	I0524 19:29:55.247873       1 main.go:223] Handling node with IPs: map[172.27.130.107:{}]
	I0524 19:29:55.247982       1 main.go:227] handling current node
	I0524 19:29:55.247998       1 main.go:223] Handling node with IPs: map[172.27.128.127:{}]
	I0524 19:29:55.248006       1 main.go:250] Node multinode-237000-m02 has CIDR [10.244.1.0/24] 
	I0524 19:30:05.254703       1 main.go:223] Handling node with IPs: map[172.27.130.107:{}]
	I0524 19:30:05.254746       1 main.go:227] handling current node
	I0524 19:30:05.254758       1 main.go:223] Handling node with IPs: map[172.27.128.127:{}]
	I0524 19:30:05.254764       1 main.go:250] Node multinode-237000-m02 has CIDR [10.244.1.0/24] 
	I0524 19:30:15.269506       1 main.go:223] Handling node with IPs: map[172.27.130.107:{}]
	I0524 19:30:15.269604       1 main.go:227] handling current node
	I0524 19:30:15.269619       1 main.go:223] Handling node with IPs: map[172.27.128.127:{}]
	I0524 19:30:15.269627       1 main.go:250] Node multinode-237000-m02 has CIDR [10.244.1.0/24] 
	I0524 19:30:25.278292       1 main.go:223] Handling node with IPs: map[172.27.130.107:{}]
	I0524 19:30:25.278395       1 main.go:227] handling current node
	I0524 19:30:25.278446       1 main.go:223] Handling node with IPs: map[172.27.128.127:{}]
	I0524 19:30:25.278456       1 main.go:250] Node multinode-237000-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [30b43ae6055b] <==
	* I0524 19:27:07.913289       1 shared_informer.go:318] Caches are synced for configmaps
	I0524 19:27:07.921091       1 controller.go:624] quota admission added evaluator for: namespaces
	E0524 19:27:08.002881       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0524 19:27:08.006128       1 controller.go:150] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0524 19:27:08.241017       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0524 19:27:08.381022       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0524 19:27:08.784157       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0524 19:27:08.792129       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0524 19:27:08.792164       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0524 19:27:09.930242       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0524 19:27:10.031522       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0524 19:27:10.226989       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0524 19:27:10.239745       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [172.27.130.107]
	I0524 19:27:10.241798       1 controller.go:624] quota admission added evaluator for: endpoints
	I0524 19:27:10.251365       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0524 19:27:10.842826       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0524 19:27:11.905956       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0524 19:27:11.943900       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0524 19:27:11.965206       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0524 19:27:24.145485       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0524 19:27:24.506855       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0524 19:29:10.842191       1 trace.go:219] Trace[467548135]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.27.130.107,type:*v1.Endpoints,resource:apiServerIPInfo (24-May-2023 19:29:10.212) (total time: 629ms):
	Trace[467548135]: ---"Transaction prepared" 131ms (19:29:10.344)
	Trace[467548135]: ---"Txn call completed" 497ms (19:29:10.842)
	Trace[467548135]: [629.93052ms] [629.93052ms] END
	
	* 
	* ==> kube-controller-manager [c29b9004260c] <==
	* I0524 19:27:23.840701       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0524 19:27:23.841993       1 shared_informer.go:318] Caches are synced for persistent volume
	I0524 19:27:23.842117       1 shared_informer.go:318] Caches are synced for GC
	I0524 19:27:23.847026       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-237000" podCIDRs=[10.244.0.0/24]
	I0524 19:27:24.158695       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0524 19:27:24.257296       1 shared_informer.go:318] Caches are synced for garbage collector
	I0524 19:27:24.300586       1 shared_informer.go:318] Caches are synced for garbage collector
	I0524 19:27:24.300701       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0524 19:27:24.555879       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xgkpb"
	I0524 19:27:24.555907       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-r6f94"
	I0524 19:27:24.643491       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0524 19:27:24.734134       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-6qkh2"
	I0524 19:27:24.754373       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-qhx48"
	I0524 19:27:24.860008       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-6qkh2"
	I0524 19:27:38.775056       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0524 19:29:21.996892       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-237000-m02\" does not exist"
	I0524 19:29:22.023970       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-237000-m02" podCIDRs=[10.244.1.0/24]
	I0524 19:29:22.049142       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9g7mc"
	I0524 19:29:22.049857       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zglzj"
	I0524 19:29:23.803345       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-237000-m02"
	I0524 19:29:23.803692       1 event.go:307] "Event occurred" object="multinode-237000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-237000-m02 event: Registered Node multinode-237000-m02 in Controller"
	W0524 19:29:36.888965       1 topologycache.go:232] Can't get CPU or zone information for multinode-237000-m02 node
	I0524 19:29:49.544209       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0524 19:29:49.575234       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-tdzj2"
	I0524 19:29:49.598757       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-9t5bp"
	
	* 
	* ==> kube-proxy [9e3f0057f97c] <==
	* I0524 19:27:26.148168       1 node.go:141] Successfully retrieved node IP: 172.27.130.107
	I0524 19:27:26.148365       1 server_others.go:110] "Detected node IP" address="172.27.130.107"
	I0524 19:27:26.149015       1 server_others.go:551] "Using iptables proxy"
	I0524 19:27:26.268276       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0524 19:27:26.268378       1 server_others.go:190] "Using iptables Proxier"
	I0524 19:27:26.268460       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0524 19:27:26.269189       1 server.go:657] "Version info" version="v1.27.2"
	I0524 19:27:26.269337       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 19:27:26.270965       1 config.go:188] "Starting service config controller"
	I0524 19:27:26.271009       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0524 19:27:26.271033       1 config.go:97] "Starting endpoint slice config controller"
	I0524 19:27:26.271041       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0524 19:27:26.273992       1 config.go:315] "Starting node config controller"
	I0524 19:27:26.274190       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0524 19:27:26.372182       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0524 19:27:26.372199       1 shared_informer.go:318] Caches are synced for service config
	I0524 19:27:26.374992       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [bde0fe1b2458] <==
	* W0524 19:27:08.952444       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0524 19:27:08.952662       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0524 19:27:09.075088       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 19:27:09.075925       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0524 19:27:09.083618       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0524 19:27:09.083659       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0524 19:27:09.104598       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 19:27:09.104788       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 19:27:09.185324       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0524 19:27:09.185376       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0524 19:27:09.196851       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0524 19:27:09.196878       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0524 19:27:09.237811       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0524 19:27:09.238154       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0524 19:27:09.315037       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0524 19:27:09.315578       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0524 19:27:09.336810       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0524 19:27:09.336941       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0524 19:27:09.408709       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0524 19:27:09.408756       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0524 19:27:09.459521       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 19:27:09.459635       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 19:27:09.569006       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0524 19:27:09.569231       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0524 19:27:10.543594       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 19:25:57 UTC, ends at Wed 2023-05-24 19:30:30 UTC. --
	May 24 19:27:38 multinode-237000 kubelet[2651]: I0524 19:27:38.622708    2651 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck956\" (UniqueName: \"kubernetes.io/projected/6498131a-f2e2-4098-9a5f-6c277fae3747-kube-api-access-ck956\") pod \"storage-provisioner\" (UID: \"6498131a-f2e2-4098-9a5f-6c277fae3747\") " pod="kube-system/storage-provisioner"
	May 24 19:27:38 multinode-237000 kubelet[2651]: I0524 19:27:38.622847    2651 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12d04c63-9898-4ccf-9e6d-92d8f3d086a4-config-volume\") pod \"coredns-5d78c9869d-qhx48\" (UID: \"12d04c63-9898-4ccf-9e6d-92d8f3d086a4\") " pod="kube-system/coredns-5d78c9869d-qhx48"
	May 24 19:27:38 multinode-237000 kubelet[2651]: I0524 19:27:38.622884    2651 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/6498131a-f2e2-4098-9a5f-6c277fae3747-tmp\") pod \"storage-provisioner\" (UID: \"6498131a-f2e2-4098-9a5f-6c277fae3747\") " pod="kube-system/storage-provisioner"
	May 24 19:27:38 multinode-237000 kubelet[2651]: I0524 19:27:38.622914    2651 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mqp2\" (UniqueName: \"kubernetes.io/projected/12d04c63-9898-4ccf-9e6d-92d8f3d086a4-kube-api-access-7mqp2\") pod \"coredns-5d78c9869d-qhx48\" (UID: \"12d04c63-9898-4ccf-9e6d-92d8f3d086a4\") " pod="kube-system/coredns-5d78c9869d-qhx48"
	May 24 19:27:39 multinode-237000 kubelet[2651]: E0524 19:27:39.724322    2651 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
	May 24 19:27:39 multinode-237000 kubelet[2651]: E0524 19:27:39.724549    2651 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/12d04c63-9898-4ccf-9e6d-92d8f3d086a4-config-volume podName:12d04c63-9898-4ccf-9e6d-92d8f3d086a4 nodeName:}" failed. No retries permitted until 2023-05-24 19:27:40.224457858 +0000 UTC m=+28.396089699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/12d04c63-9898-4ccf-9e6d-92d8f3d086a4-config-volume") pod "coredns-5d78c9869d-qhx48" (UID: "12d04c63-9898-4ccf-9e6d-92d8f3d086a4") : failed to sync configmap cache: timed out waiting for the condition
	May 24 19:27:40 multinode-237000 kubelet[2651]: I0524 19:27:40.598562    2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.598516766 podCreationTimestamp="2023-05-24 19:27:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-24 19:27:40.598054071 +0000 UTC m=+28.769686012" watchObservedRunningTime="2023-05-24 19:27:40.598516766 +0000 UTC m=+28.770148707"
	May 24 19:27:41 multinode-237000 kubelet[2651]: I0524 19:27:41.120874    2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7975ebab5fd50f72baed98d4f8516871ca5e3312e011c102c0b0b05fe4899c4f"
	May 24 19:27:42 multinode-237000 kubelet[2651]: I0524 19:27:42.175580    2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-qhx48" podStartSLOduration=18.175540587 podCreationTimestamp="2023-05-24 19:27:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-24 19:27:42.175172291 +0000 UTC m=+30.346804232" watchObservedRunningTime="2023-05-24 19:27:42.175540587 +0000 UTC m=+30.347172428"
	May 24 19:28:12 multinode-237000 kubelet[2651]: E0524 19:28:12.325560    2651 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:28:12 multinode-237000 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:28:12 multinode-237000 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:28:12 multinode-237000 kubelet[2651]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:29:12 multinode-237000 kubelet[2651]: E0524 19:29:12.330979    2651 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:29:12 multinode-237000 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:29:12 multinode-237000 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:29:12 multinode-237000 kubelet[2651]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:29:49 multinode-237000 kubelet[2651]: I0524 19:29:49.630462    2651 topology_manager.go:212] "Topology Admit Handler"
	May 24 19:29:49 multinode-237000 kubelet[2651]: I0524 19:29:49.735832    2651 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x95n4\" (UniqueName: \"kubernetes.io/projected/57289db9-2a89-4cb2-b073-88d539b07054-kube-api-access-x95n4\") pod \"busybox-67b7f59bb-9t5bp\" (UID: \"57289db9-2a89-4cb2-b073-88d539b07054\") " pod="default/busybox-67b7f59bb-9t5bp"
	May 24 19:29:50 multinode-237000 kubelet[2651]: I0524 19:29:50.895664    2651 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44b4e40976026b04324bb77bb9f5c5b4c435c1d35fcac7028f5f4bfed8ca071f"
	May 24 19:29:52 multinode-237000 kubelet[2651]: I0524 19:29:52.978058    2651 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-67b7f59bb-9t5bp" podStartSLOduration=3.089166834 podCreationTimestamp="2023-05-24 19:29:49 +0000 UTC" firstStartedPulling="2023-05-24 19:29:50.947029875 +0000 UTC m=+159.118661716" lastFinishedPulling="2023-05-24 19:29:51.835877433 +0000 UTC m=+160.007509274" observedRunningTime="2023-05-24 19:29:52.974887071 +0000 UTC m=+161.146518912" watchObservedRunningTime="2023-05-24 19:29:52.978014392 +0000 UTC m=+161.149646333"
	May 24 19:30:12 multinode-237000 kubelet[2651]: E0524 19:30:12.326949    2651 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:30:12 multinode-237000 kubelet[2651]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:30:12 multinode-237000 kubelet[2651]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:30:12 multinode-237000 kubelet[2651]:  > table=nat chain=KUBE-KUBELET-CANARY
	

                                                
                                                
-- /stdout --
E0524 19:30:31.365033    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-237000 -n multinode-237000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-237000 -n multinode-237000: (5.0755673s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-237000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (39.06s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (358.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-237000
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-237000
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-237000: (1m0.6728256s)
multinode_test.go:295: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-237000 --wait=true -v=8 --alsologtostderr
E0524 19:39:52.184774    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 19:40:08.990890    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 19:40:19.891844    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
E0524 19:40:31.366282    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:42:16.691140    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
multinode_test.go:295: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-237000 --wait=true -v=8 --alsologtostderr: (4m38.5632252s)
multinode_test.go:300: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-237000
multinode_test.go:307: reported node list is not the same after restart. Before restart: multinode-237000	172.27.130.107
multinode-237000-m02	172.27.128.127
multinode-237000-m03	172.27.134.200

After restart: multinode-237000	172.27.143.236
multinode-237000-m02	172.27.142.80
multinode-237000-m03	172.27.137.67
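The mismatch above is the whole failure: each node kept its name but came back with a different address, most likely because the Hyper-V VMs pick up fresh DHCP leases across the stop/start cycle. For reference, a minimal illustrative Go sketch follows (not the actual multinode_test.go code; parseNodeList and the inlined sample strings are assumptions for the example). It parses two "minikube node list" captures, one tab-separated "name ip" pair per line as shown in the message above, and reports which nodes changed IP:

	// Illustrative sketch only, not the real multinode_test.go implementation.
	// It parses two "minikube node list" captures (one "name\tip" pair per line,
	// as printed in the failure message above) and reports nodes whose IP changed.
	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// parseNodeList turns "name\tip" lines into a name -> IP map.
	func parseNodeList(out string) map[string]string {
		nodes := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(out))
		for sc.Scan() {
			if fields := strings.Fields(sc.Text()); len(fields) == 2 {
				nodes[fields[0]] = fields[1]
			}
		}
		return nodes
	}

	func main() {
		// Values copied from the failure message above.
		before := "multinode-237000\t172.27.130.107\nmultinode-237000-m02\t172.27.128.127\nmultinode-237000-m03\t172.27.134.200\n"
		after := "multinode-237000\t172.27.143.236\nmultinode-237000-m02\t172.27.142.80\nmultinode-237000-m03\t172.27.137.67\n"

		b, a := parseNodeList(before), parseNodeList(after)
		for name, oldIP := range b {
			newIP, ok := a[name]
			switch {
			case !ok:
				fmt.Printf("%s: missing after restart\n", name)
			case newIP != oldIP:
				fmt.Printf("%s: IP changed %s -> %s\n", name, oldIP, newIP)
			}
		}
	}

Run against the two captures above, this prints an IP change for all three nodes, which is consistent with the mismatch multinode_test.go:307 reports.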
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-237000 -n multinode-237000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-237000 -n multinode-237000: (5.2044153s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 logs -n 25: (5.3222652s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-237000 ssh -n                                                                                                  | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:33 UTC | 24 May 23 19:34 UTC |
	|         | multinode-237000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-237000 cp multinode-237000-m02:/home/docker/cp-test.txt                                                        | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:34 UTC | 24 May 23 19:34 UTC |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1001856804\001\cp-test_multinode-237000-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-237000 ssh -n                                                                                                  | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:34 UTC | 24 May 23 19:34 UTC |
	|         | multinode-237000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-237000 cp multinode-237000-m02:/home/docker/cp-test.txt                                                        | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:34 UTC | 24 May 23 19:34 UTC |
	|         | multinode-237000:/home/docker/cp-test_multinode-237000-m02_multinode-237000.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-237000 ssh -n                                                                                                  | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:34 UTC | 24 May 23 19:34 UTC |
	|         | multinode-237000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-237000 ssh -n multinode-237000 sudo cat                                                                        | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:34 UTC | 24 May 23 19:34 UTC |
	|         | /home/docker/cp-test_multinode-237000-m02_multinode-237000.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-237000 cp multinode-237000-m02:/home/docker/cp-test.txt                                                        | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:34 UTC | 24 May 23 19:34 UTC |
	|         | multinode-237000-m03:/home/docker/cp-test_multinode-237000-m02_multinode-237000-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-237000 ssh -n                                                                                                  | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:34 UTC | 24 May 23 19:34 UTC |
	|         | multinode-237000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-237000 ssh -n multinode-237000-m03 sudo cat                                                                    | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:34 UTC | 24 May 23 19:34 UTC |
	|         | /home/docker/cp-test_multinode-237000-m02_multinode-237000-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-237000 cp testdata\cp-test.txt                                                                                 | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:34 UTC | 24 May 23 19:34 UTC |
	|         | multinode-237000-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-237000 ssh -n                                                                                                  | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:34 UTC | 24 May 23 19:34 UTC |
	|         | multinode-237000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-237000 cp multinode-237000-m03:/home/docker/cp-test.txt                                                        | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:34 UTC | 24 May 23 19:34 UTC |
	|         | C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1001856804\001\cp-test_multinode-237000-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-237000 ssh -n                                                                                                  | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:34 UTC | 24 May 23 19:34 UTC |
	|         | multinode-237000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-237000 cp multinode-237000-m03:/home/docker/cp-test.txt                                                        | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:34 UTC | 24 May 23 19:34 UTC |
	|         | multinode-237000:/home/docker/cp-test_multinode-237000-m03_multinode-237000.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-237000 ssh -n                                                                                                  | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:35 UTC | 24 May 23 19:35 UTC |
	|         | multinode-237000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-237000 ssh -n multinode-237000 sudo cat                                                                        | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:35 UTC | 24 May 23 19:35 UTC |
	|         | /home/docker/cp-test_multinode-237000-m03_multinode-237000.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-237000 cp multinode-237000-m03:/home/docker/cp-test.txt                                                        | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:35 UTC | 24 May 23 19:35 UTC |
	|         | multinode-237000-m02:/home/docker/cp-test_multinode-237000-m03_multinode-237000-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-237000 ssh -n                                                                                                  | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:35 UTC | 24 May 23 19:35 UTC |
	|         | multinode-237000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-237000 ssh -n multinode-237000-m02 sudo cat                                                                    | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:35 UTC | 24 May 23 19:35 UTC |
	|         | /home/docker/cp-test_multinode-237000-m03_multinode-237000-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-237000 node stop m03                                                                                           | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:35 UTC | 24 May 23 19:35 UTC |
	| node    | multinode-237000 node start                                                                                              | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:35 UTC | 24 May 23 19:37 UTC |
	|         | m03 --alsologtostderr                                                                                                    |                  |                   |         |                     |                     |
	| node    | list -p multinode-237000                                                                                                 | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:37 UTC |                     |
	| stop    | -p multinode-237000                                                                                                      | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:37 UTC | 24 May 23 19:38 UTC |
	| start   | -p multinode-237000                                                                                                      | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:38 UTC | 24 May 23 19:43 UTC |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	| node    | list -p multinode-237000                                                                                                 | multinode-237000 | minikube1\jenkins | v1.30.1 | 24 May 23 19:43 UTC |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 19:38:31
	Running on machine: minikube1
	Binary: Built with gc go1.20.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 19:38:31.130169    2140 out.go:296] Setting OutFile to fd 1008 ...
	I0524 19:38:31.193702    2140 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 19:38:31.193702    2140 out.go:309] Setting ErrFile to fd 572...
	I0524 19:38:31.193702    2140 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 19:38:31.216219    2140 out.go:303] Setting JSON to false
	I0524 19:38:31.219873    2140 start.go:125] hostinfo: {"hostname":"minikube1","uptime":5624,"bootTime":1684951486,"procs":148,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2965 Build 19045.2965","kernelVersion":"10.0.19045.2965 Build 19045.2965","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0524 19:38:31.220034    2140 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 19:38:31.223365    2140 out.go:177] * [multinode-237000] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	I0524 19:38:31.226970    2140 notify.go:220] Checking for updates...
	I0524 19:38:31.229074    2140 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:38:31.230525    2140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 19:38:31.233786    2140 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0524 19:38:31.236945    2140 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 19:38:31.239520    2140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 19:38:31.241840    2140 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:38:31.242624    2140 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 19:38:32.925256    2140 out.go:177] * Using the hyperv driver based on existing profile
	I0524 19:38:32.928700    2140 start.go:295] selected driver: hyperv
	I0524 19:38:32.928700    2140 start.go:870] validating driver "hyperv" against &{Name:multinode-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.130.107 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.128.127 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.134.200 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 19:38:32.928700    2140 start.go:881] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 19:38:32.978229    2140 start_flags.go:915] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0524 19:38:32.978332    2140 cni.go:84] Creating CNI manager for ""
	I0524 19:38:32.978332    2140 cni.go:136] 3 nodes found, recommending kindnet
	I0524 19:38:32.978332    2140 start_flags.go:319] config:
	{Name:multinode-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.130.107 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.128.127 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.134.200 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 19:38:32.978583    2140 iso.go:125] acquiring lock: {Name:mk3b29db369ab0f922ac5eeb788beee87e18ec94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 19:38:32.985389    2140 out.go:177] * Starting control plane node multinode-237000 in cluster multinode-237000
	I0524 19:38:32.987815    2140 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 19:38:32.987815    2140 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0524 19:38:32.987815    2140 cache.go:57] Caching tarball of preloaded images
	I0524 19:38:32.988337    2140 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0524 19:38:32.988502    2140 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 19:38:32.988502    2140 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\config.json ...
	I0524 19:38:32.990815    2140 cache.go:195] Successfully downloaded all kic artifacts
	I0524 19:38:32.990815    2140 start.go:364] acquiring machines lock for multinode-237000: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 19:38:32.990815    2140 start.go:368] acquired machines lock for "multinode-237000" in 0s
	I0524 19:38:32.990815    2140 start.go:96] Skipping create...Using existing machine configuration
	I0524 19:38:32.990815    2140 fix.go:55] fixHost starting: 
	I0524 19:38:32.991798    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:38:33.727434    2140 main.go:141] libmachine: [stdout =====>] : Off
	
	I0524 19:38:33.727434    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:33.727567    2140 fix.go:103] recreateIfNeeded on multinode-237000: state=Stopped err=<nil>
	W0524 19:38:33.727567    2140 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 19:38:33.732718    2140 out.go:177] * Restarting existing hyperv VM for "multinode-237000" ...
	I0524 19:38:33.735370    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-237000
	I0524 19:38:35.385769    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:38:35.385769    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:35.386115    2140 main.go:141] libmachine: Waiting for host to start...
	I0524 19:38:35.386161    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:38:36.112436    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:38:36.112826    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:36.112887    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:38:37.173810    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:38:37.173874    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:38.186587    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:38:38.945783    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:38:38.945815    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:38.945815    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:38:40.011188    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:38:40.011225    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:41.025284    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:38:41.784410    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:38:41.784410    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:41.784684    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:38:42.847188    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:38:42.847538    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:43.850097    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:38:44.594912    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:38:44.594912    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:44.594912    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:38:45.646066    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:38:45.646066    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:46.660955    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:38:47.404702    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:38:47.404927    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:47.405131    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:38:48.422192    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:38:48.422299    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:49.423886    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:38:50.187265    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:38:50.187505    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:50.187505    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:38:51.215195    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:38:51.215386    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:52.228993    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:38:52.997848    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:38:52.997848    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:52.997938    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:38:54.066905    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:38:54.066905    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:55.070032    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:38:55.849046    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:38:55.849046    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:55.849161    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:38:56.896232    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:38:56.896393    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:57.899614    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:38:58.634949    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:38:58.634949    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:38:58.635353    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:38:59.696680    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:38:59.696680    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:00.701928    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:01.512328    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:01.512328    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:01.512554    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:02.651428    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:02.651428    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:02.654357    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:03.411064    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:03.411064    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:03.411064    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:04.500002    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:04.500002    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:04.500002    2140 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\config.json ...
	I0524 19:39:04.502563    2140 machine.go:88] provisioning docker machine ...
	I0524 19:39:04.502563    2140 buildroot.go:166] provisioning hostname "multinode-237000"
	I0524 19:39:04.502563    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:05.258771    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:05.258980    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:05.258980    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:06.382545    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:06.382649    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:06.389833    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:39:06.390653    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.143.236 22 <nil> <nil>}
	I0524 19:39:06.390653    2140 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-237000 && echo "multinode-237000" | sudo tee /etc/hostname
	I0524 19:39:06.552544    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-237000
	
	I0524 19:39:06.552544    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:07.311033    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:07.311033    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:07.311160    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:08.396278    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:08.396278    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:08.400495    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:39:08.401741    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.143.236 22 <nil> <nil>}
	I0524 19:39:08.401741    2140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-237000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-237000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-237000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 19:39:08.556953    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 19:39:08.556953    2140 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0524 19:39:08.556953    2140 buildroot.go:174] setting up certificates
	I0524 19:39:08.556953    2140 provision.go:83] configureAuth start
	I0524 19:39:08.556953    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:09.313321    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:09.313504    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:09.313504    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:10.368487    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:10.368487    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:10.368487    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:11.125769    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:11.125769    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:11.125917    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:12.262441    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:12.262698    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:12.262698    2140 provision.go:138] copyHostCerts
	I0524 19:39:12.262698    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0524 19:39:12.262698    2140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0524 19:39:12.262698    2140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0524 19:39:12.262698    2140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0524 19:39:12.262698    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0524 19:39:12.262698    2140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0524 19:39:12.262698    2140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0524 19:39:12.262698    2140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0524 19:39:12.262698    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0524 19:39:12.262698    2140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0524 19:39:12.262698    2140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0524 19:39:12.262698    2140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0524 19:39:12.262698    2140 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-237000 san=[172.27.143.236 172.27.143.236 localhost 127.0.0.1 minikube multinode-237000]
	I0524 19:39:12.542656    2140 provision.go:172] copyRemoteCerts
	I0524 19:39:12.552889    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 19:39:12.552995    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:13.313753    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:13.313753    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:13.313753    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:14.399778    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:14.400029    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:14.400029    2140 sshutil.go:53] new ssh client: &{IP:172.27.143.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:39:14.512603    2140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.9597149s)
	I0524 19:39:14.512890    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0524 19:39:14.513320    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0524 19:39:14.555222    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0524 19:39:14.555621    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0524 19:39:14.595498    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0524 19:39:14.595567    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0524 19:39:14.637229    2140 provision.go:86] duration metric: configureAuth took 6.0802781s
	I0524 19:39:14.637287    2140 buildroot.go:189] setting minikube options for container-runtime
	I0524 19:39:14.637978    2140 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:39:14.638089    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:15.394344    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:15.394576    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:15.394673    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:16.497348    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:16.497348    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:16.501528    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:39:16.502870    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.143.236 22 <nil> <nil>}
	I0524 19:39:16.502870    2140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 19:39:16.646489    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 19:39:16.646563    2140 buildroot.go:70] root file system type: tmpfs
	I0524 19:39:16.646893    2140 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 19:39:16.646893    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:17.413685    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:17.413685    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:17.413685    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:18.486632    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:18.486632    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:18.490498    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:39:18.491774    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.143.236 22 <nil> <nil>}
	I0524 19:39:18.491774    2140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 19:39:18.654889    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 19:39:18.654889    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:19.430792    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:19.430792    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:19.430792    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:20.515567    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:20.515839    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:20.520223    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:39:20.520901    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.143.236 22 <nil> <nil>}
	I0524 19:39:20.520901    2140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 19:39:21.994549    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 19:39:21.994549    2140 machine.go:91] provisioned docker machine in 17.4919929s
	I0524 19:39:21.994549    2140 start.go:300] post-start starting for "multinode-237000" (driver="hyperv")
	I0524 19:39:21.994549    2140 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 19:39:22.005611    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 19:39:22.006167    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:22.755736    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:22.755819    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:22.755890    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:23.846616    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:23.846616    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:23.847187    2140 sshutil.go:53] new ssh client: &{IP:172.27.143.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:39:23.960343    2140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.9547325s)
	I0524 19:39:23.971208    2140 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 19:39:23.976732    2140 command_runner.go:130] > NAME=Buildroot
	I0524 19:39:23.976852    2140 command_runner.go:130] > VERSION=2021.02.12-1-g419828a-dirty
	I0524 19:39:23.976852    2140 command_runner.go:130] > ID=buildroot
	I0524 19:39:23.976852    2140 command_runner.go:130] > VERSION_ID=2021.02.12
	I0524 19:39:23.976852    2140 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0524 19:39:23.976958    2140 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 19:39:23.976958    2140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0524 19:39:23.977380    2140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0524 19:39:23.978286    2140 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> 65602.pem in /etc/ssl/certs
	I0524 19:39:23.978346    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> /etc/ssl/certs/65602.pem
	I0524 19:39:23.991157    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0524 19:39:24.017458    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /etc/ssl/certs/65602.pem (1708 bytes)
	I0524 19:39:24.066403    2140 start.go:303] post-start completed in 2.0718553s
	I0524 19:39:24.066403    2140 fix.go:57] fixHost completed within 51.0756085s
	I0524 19:39:24.066403    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:24.840288    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:24.840288    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:24.840288    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:25.948346    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:25.948346    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:25.955623    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:39:25.956629    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.143.236 22 <nil> <nil>}
	I0524 19:39:25.956629    2140 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0524 19:39:26.097627    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684957166.096023100
	
	I0524 19:39:26.097627    2140 fix.go:207] guest clock: 1684957166.096023100
	I0524 19:39:26.097627    2140 fix.go:220] Guest: 2023-05-24 19:39:26.0960231 +0000 UTC Remote: 2023-05-24 19:39:24.0664036 +0000 UTC m=+53.024549201 (delta=2.0296195s)
	I0524 19:39:26.097768    2140 fix.go:191] guest clock delta is within tolerance: 2.0296195s
	I0524 19:39:26.097768    2140 start.go:83] releasing machines lock for "multinode-237000", held for 53.1069737s
	I0524 19:39:26.098008    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:26.829702    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:26.829702    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:26.829702    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:27.947919    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:27.948196    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:27.952542    2140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 19:39:27.952713    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:27.960970    2140 ssh_runner.go:195] Run: cat /version.json
	I0524 19:39:27.961530    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:39:28.741913    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:28.741913    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:28.742118    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:28.746833    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:39:28.746833    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:28.746833    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:39:29.901294    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:29.901294    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:29.901981    2140 sshutil.go:53] new ssh client: &{IP:172.27.143.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:39:29.924541    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:39:29.924541    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:39:29.924642    2140 sshutil.go:53] new ssh client: &{IP:172.27.143.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:39:30.000528    2140 command_runner.go:130] > {"iso_version": "v1.30.1-1684536668-16501", "kicbase_version": "v0.0.39-1684523789-16533", "minikube_version": "v1.30.1", "commit": "4302bbdfbbd8ec304b126be6025f52f2ccb3add9"}
	I0524 19:39:30.000621    2140 ssh_runner.go:235] Completed: cat /version.json: (2.0396515s)
	I0524 19:39:30.011706    2140 ssh_runner.go:195] Run: systemctl --version
	I0524 19:39:31.923935    2140 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0524 19:39:31.923935    2140 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (3.9713947s)
	I0524 19:39:31.923935    2140 command_runner.go:130] > systemd 247 (247)
	I0524 19:39:31.923935    2140 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0524 19:39:31.923935    2140 ssh_runner.go:235] Completed: systemctl --version: (1.9122302s)
	I0524 19:39:31.933822    2140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0524 19:39:31.941295    2140 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0524 19:39:31.942147    2140 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 19:39:31.953154    2140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 19:39:31.977793    2140 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0524 19:39:31.977793    2140 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0524 19:39:31.977793    2140 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 19:39:31.985245    2140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 19:39:32.024744    2140 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
	I0524 19:39:32.024839    2140 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
	I0524 19:39:32.024839    2140 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
	I0524 19:39:32.024839    2140 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
	I0524 19:39:32.024839    2140 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0524 19:39:32.024839    2140 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0524 19:39:32.024839    2140 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0524 19:39:32.024839    2140 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0524 19:39:32.024839    2140 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 19:39:32.024839    2140 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0524 19:39:32.024839    2140 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0524 19:39:32.024839    2140 docker.go:563] Images already preloaded, skipping extraction
	I0524 19:39:32.024839    2140 start.go:481] detecting cgroup driver to use...
	I0524 19:39:32.025591    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 19:39:32.055615    2140 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0524 19:39:32.066402    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 19:39:32.096174    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 19:39:32.117812    2140 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 19:39:32.128685    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 19:39:32.160787    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 19:39:32.186840    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 19:39:32.215202    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 19:39:32.242491    2140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 19:39:32.268687    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 19:39:32.296875    2140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 19:39:32.312397    2140 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0524 19:39:32.323786    2140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 19:39:32.348622    2140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:39:32.529472    2140 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 19:39:32.559594    2140 start.go:481] detecting cgroup driver to use...
	I0524 19:39:32.570220    2140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 19:39:32.594491    2140 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0524 19:39:32.594570    2140 command_runner.go:130] > [Unit]
	I0524 19:39:32.594570    2140 command_runner.go:130] > Description=Docker Application Container Engine
	I0524 19:39:32.594570    2140 command_runner.go:130] > Documentation=https://docs.docker.com
	I0524 19:39:32.594570    2140 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0524 19:39:32.594570    2140 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0524 19:39:32.594662    2140 command_runner.go:130] > StartLimitBurst=3
	I0524 19:39:32.594662    2140 command_runner.go:130] > StartLimitIntervalSec=60
	I0524 19:39:32.594662    2140 command_runner.go:130] > [Service]
	I0524 19:39:32.594662    2140 command_runner.go:130] > Type=notify
	I0524 19:39:32.594662    2140 command_runner.go:130] > Restart=on-failure
	I0524 19:39:32.594662    2140 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0524 19:39:32.594662    2140 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0524 19:39:32.594662    2140 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0524 19:39:32.594739    2140 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0524 19:39:32.594739    2140 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0524 19:39:32.594739    2140 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0524 19:39:32.594739    2140 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0524 19:39:32.594739    2140 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0524 19:39:32.594739    2140 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0524 19:39:32.594824    2140 command_runner.go:130] > ExecStart=
	I0524 19:39:32.594824    2140 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0524 19:39:32.594824    2140 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0524 19:39:32.594883    2140 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0524 19:39:32.594883    2140 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0524 19:39:32.594883    2140 command_runner.go:130] > LimitNOFILE=infinity
	I0524 19:39:32.594883    2140 command_runner.go:130] > LimitNPROC=infinity
	I0524 19:39:32.594959    2140 command_runner.go:130] > LimitCORE=infinity
	I0524 19:39:32.594959    2140 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0524 19:39:32.594959    2140 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0524 19:39:32.594959    2140 command_runner.go:130] > TasksMax=infinity
	I0524 19:39:32.594959    2140 command_runner.go:130] > TimeoutStartSec=0
	I0524 19:39:32.594959    2140 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0524 19:39:32.595033    2140 command_runner.go:130] > Delegate=yes
	I0524 19:39:32.595033    2140 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0524 19:39:32.595033    2140 command_runner.go:130] > KillMode=process
	I0524 19:39:32.595033    2140 command_runner.go:130] > [Install]
	I0524 19:39:32.595083    2140 command_runner.go:130] > WantedBy=multi-user.target
	I0524 19:39:32.605560    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 19:39:32.635873    2140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 19:39:32.673204    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 19:39:32.706753    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 19:39:32.741297    2140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 19:39:32.797600    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 19:39:32.819773    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 19:39:32.850517    2140 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0524 19:39:32.861181    2140 ssh_runner.go:195] Run: which cri-dockerd
	I0524 19:39:32.866130    2140 command_runner.go:130] > /usr/bin/cri-dockerd
	I0524 19:39:32.876164    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 19:39:32.892308    2140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 19:39:32.941794    2140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 19:39:33.129079    2140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 19:39:33.292633    2140 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 19:39:33.292724    2140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 19:39:33.331435    2140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:39:33.523049    2140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 19:39:35.244285    2140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.7212365s)
	I0524 19:39:35.253779    2140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 19:39:35.452840    2140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 19:39:35.633807    2140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 19:39:35.806179    2140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:39:35.988948    2140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 19:39:36.028927    2140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:39:36.206146    2140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 19:39:36.310424    2140 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 19:39:36.326470    2140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 19:39:36.339797    2140 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0524 19:39:36.339797    2140 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0524 19:39:36.339797    2140 command_runner.go:130] > Device: 16h/22d	Inode: 896         Links: 1
	I0524 19:39:36.339797    2140 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0524 19:39:36.339797    2140 command_runner.go:130] > Access: 2023-05-24 19:39:36.222496642 +0000
	I0524 19:39:36.339797    2140 command_runner.go:130] > Modify: 2023-05-24 19:39:36.222496642 +0000
	I0524 19:39:36.339797    2140 command_runner.go:130] > Change: 2023-05-24 19:39:36.226494995 +0000
	I0524 19:39:36.339797    2140 command_runner.go:130] >  Birth: -
	I0524 19:39:36.339797    2140 start.go:549] Will wait 60s for crictl version
	I0524 19:39:36.349366    2140 ssh_runner.go:195] Run: which crictl
	I0524 19:39:36.356592    2140 command_runner.go:130] > /usr/bin/crictl
	I0524 19:39:36.366855    2140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 19:39:36.430011    2140 command_runner.go:130] > Version:  0.1.0
	I0524 19:39:36.430011    2140 command_runner.go:130] > RuntimeName:  docker
	I0524 19:39:36.430011    2140 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0524 19:39:36.430011    2140 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0524 19:39:36.430011    2140 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 19:39:36.442786    2140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 19:39:36.494904    2140 command_runner.go:130] > 20.10.23
	I0524 19:39:36.504811    2140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 19:39:36.546404    2140 command_runner.go:130] > 20.10.23
	I0524 19:39:36.549357    2140 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 19:39:36.549357    2140 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0524 19:39:36.555727    2140 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0524 19:39:36.555727    2140 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0524 19:39:36.555727    2140 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0524 19:39:36.555727    2140 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:74:1b:be Flags:up|broadcast|multicast|running}
	I0524 19:39:36.559305    2140 ip.go:210] interface addr: fe80::2d9b:6c8:36de:16db/64
	I0524 19:39:36.559564    2140 ip.go:210] interface addr: 172.27.128.1/20
	I0524 19:39:36.569250    2140 ssh_runner.go:195] Run: grep 172.27.128.1	host.minikube.internal$ /etc/hosts
	I0524 19:39:36.575075    2140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 19:39:36.593260    2140 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 19:39:36.600306    2140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 19:39:36.638206    2140 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
	I0524 19:39:36.639097    2140 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
	I0524 19:39:36.639097    2140 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
	I0524 19:39:36.639097    2140 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
	I0524 19:39:36.639097    2140 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0524 19:39:36.639097    2140 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0524 19:39:36.639097    2140 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0524 19:39:36.639097    2140 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0524 19:39:36.639097    2140 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 19:39:36.639097    2140 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0524 19:39:36.639097    2140 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0524 19:39:36.639097    2140 docker.go:563] Images already preloaded, skipping extraction
	I0524 19:39:36.645107    2140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 19:39:36.679880    2140 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
	I0524 19:39:36.679880    2140 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
	I0524 19:39:36.679880    2140 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
	I0524 19:39:36.679880    2140 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
	I0524 19:39:36.679965    2140 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0524 19:39:36.679965    2140 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0524 19:39:36.679997    2140 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0524 19:39:36.679997    2140 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0524 19:39:36.679997    2140 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 19:39:36.679997    2140 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0524 19:39:36.680040    2140 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0524 19:39:36.680086    2140 cache_images.go:84] Images are preloaded, skipping loading
	I0524 19:39:36.687618    2140 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 19:39:36.731348    2140 command_runner.go:130] > cgroupfs
	I0524 19:39:36.731348    2140 cni.go:84] Creating CNI manager for ""
	I0524 19:39:36.731348    2140 cni.go:136] 3 nodes found, recommending kindnet
	I0524 19:39:36.731348    2140 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 19:39:36.732618    2140 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.143.236 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-237000 NodeName:multinode-237000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.143.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.143.236 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 19:39:36.732879    2140 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.143.236
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-237000"
	  kubeletExtraArgs:
	    node-ip: 172.27.143.236
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.143.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 19:39:36.733012    2140 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-237000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.143.236
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 19:39:36.742008    2140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 19:39:36.761200    2140 command_runner.go:130] > kubeadm
	I0524 19:39:36.761271    2140 command_runner.go:130] > kubectl
	I0524 19:39:36.761271    2140 command_runner.go:130] > kubelet
	I0524 19:39:36.761327    2140 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 19:39:36.770108    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 19:39:36.785024    2140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0524 19:39:36.813331    2140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 19:39:36.842916    2140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0524 19:39:36.884724    2140 ssh_runner.go:195] Run: grep 172.27.143.236	control-plane.minikube.internal$ /etc/hosts
	I0524 19:39:36.890559    2140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.143.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 19:39:36.910009    2140 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000 for IP: 172.27.143.236
	I0524 19:39:36.910048    2140 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:39:36.910705    2140 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0524 19:39:36.910705    2140 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0524 19:39:36.911410    2140 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\client.key
	I0524 19:39:36.911965    2140 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.key.5e67a443
	I0524 19:39:36.912149    2140 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.crt.5e67a443 with IP's: [172.27.143.236 10.96.0.1 127.0.0.1 10.0.0.1]
	I0524 19:39:37.007427    2140 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.crt.5e67a443 ...
	I0524 19:39:37.007427    2140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.crt.5e67a443: {Name:mk1c1776bb44e355b16974213011e65e6be7ae6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:39:37.008660    2140 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.key.5e67a443 ...
	I0524 19:39:37.009667    2140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.key.5e67a443: {Name:mkd52e1d13871c37ce48c84ab45f2f78ae1e8f21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:39:37.009935    2140 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.crt.5e67a443 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.crt
	I0524 19:39:37.019774    2140 certs.go:341] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.key.5e67a443 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.key
	I0524 19:39:37.020783    2140 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.key
	I0524 19:39:37.020783    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0524 19:39:37.021400    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0524 19:39:37.022141    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0524 19:39:37.022530    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0524 19:39:37.022530    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0524 19:39:37.022530    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0524 19:39:37.022530    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0524 19:39:37.023145    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0524 19:39:37.023336    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem (1338 bytes)
	W0524 19:39:37.023943    2140 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560_empty.pem, impossibly tiny 0 bytes
	I0524 19:39:37.024155    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0524 19:39:37.024155    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0524 19:39:37.024155    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0524 19:39:37.024824    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0524 19:39:37.024824    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem (1708 bytes)
	I0524 19:39:37.025400    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:39:37.025400    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem -> /usr/share/ca-certificates/6560.pem
	I0524 19:39:37.025400    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> /usr/share/ca-certificates/65602.pem
	I0524 19:39:37.026794    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 19:39:37.076997    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0524 19:39:37.119319    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 19:39:37.158398    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 19:39:37.200900    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 19:39:37.247835    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 19:39:37.289031    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 19:39:37.328606    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 19:39:37.369667    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 19:39:37.417212    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem --> /usr/share/ca-certificates/6560.pem (1338 bytes)
	I0524 19:39:37.457143    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /usr/share/ca-certificates/65602.pem (1708 bytes)
	I0524 19:39:37.504131    2140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 19:39:37.545147    2140 ssh_runner.go:195] Run: openssl version
	I0524 19:39:37.558071    2140 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0524 19:39:37.567934    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65602.pem && ln -fs /usr/share/ca-certificates/65602.pem /etc/ssl/certs/65602.pem"
	I0524 19:39:37.595623    2140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65602.pem
	I0524 19:39:37.601579    2140 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 19:39:37.602216    2140 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 19:39:37.611542    2140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65602.pem
	I0524 19:39:37.618783    2140 command_runner.go:130] > 3ec20f2e
	I0524 19:39:37.629393    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65602.pem /etc/ssl/certs/3ec20f2e.0"
	I0524 19:39:37.659339    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 19:39:37.689833    2140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:39:37.696576    2140 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:39:37.696576    2140 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:39:37.704624    2140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:39:37.713130    2140 command_runner.go:130] > b5213941
	I0524 19:39:37.722871    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 19:39:37.753954    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6560.pem && ln -fs /usr/share/ca-certificates/6560.pem /etc/ssl/certs/6560.pem"
	I0524 19:39:37.781776    2140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6560.pem
	I0524 19:39:37.788694    2140 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 19:39:37.788757    2140 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 19:39:37.798682    2140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6560.pem
	I0524 19:39:37.807371    2140 command_runner.go:130] > 51391683
	I0524 19:39:37.818073    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6560.pem /etc/ssl/certs/51391683.0"
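
The symlink commands above follow OpenSSL's hashed-directory layout: each trusted PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs as <subject-hash>.0, where the hash is the value printed by openssl x509 -hash. A minimal sketch of that step, reconstructed from the commands logged above (the certificate path and hash are the ones from this run):

	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")     # prints b5213941 for this CA, per the log
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"    # refresh the hashed symlink that OpenSSL looks up
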
	I0524 19:39:37.844260    2140 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 19:39:37.853285    2140 command_runner.go:130] > ca.crt
	I0524 19:39:37.854217    2140 command_runner.go:130] > ca.key
	I0524 19:39:37.854217    2140 command_runner.go:130] > healthcheck-client.crt
	I0524 19:39:37.854251    2140 command_runner.go:130] > healthcheck-client.key
	I0524 19:39:37.854251    2140 command_runner.go:130] > peer.crt
	I0524 19:39:37.854251    2140 command_runner.go:130] > peer.key
	I0524 19:39:37.854251    2140 command_runner.go:130] > server.crt
	I0524 19:39:37.854251    2140 command_runner.go:130] > server.key
	I0524 19:39:37.863252    2140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0524 19:39:37.871406    2140 command_runner.go:130] > Certificate will not expire
	I0524 19:39:37.883975    2140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0524 19:39:37.892573    2140 command_runner.go:130] > Certificate will not expire
	I0524 19:39:37.901702    2140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0524 19:39:37.910989    2140 command_runner.go:130] > Certificate will not expire
	I0524 19:39:37.921825    2140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0524 19:39:37.930310    2140 command_runner.go:130] > Certificate will not expire
	I0524 19:39:37.940070    2140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0524 19:39:37.948420    2140 command_runner.go:130] > Certificate will not expire
	I0524 19:39:37.957773    2140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0524 19:39:37.967388    2140 command_runner.go:130] > Certificate will not expire
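
Each -checkend 86400 invocation above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would force minikube to regenerate that certificate. A rough sketch of the same sweep over the control-plane certificates named in the log:

	for crt in apiserver-etcd-client.crt apiserver-kubelet-client.crt etcd/server.crt \
	           etcd/healthcheck-client.crt etcd/peer.crt front-proxy-client.crt; do
	  if ! openssl x509 -noout -in "/var/lib/minikube/certs/$crt" -checkend 86400; then
	    echo "$crt expires within 24h"    # none of the certificates above hit this branch
	  fi
	done
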
	I0524 19:39:37.967617    2140 kubeadm.go:404] StartCluster: {Name:multinode-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.143.236 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.128.127 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.134.200 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 19:39:37.975077    2140 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 19:39:38.023229    2140 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 19:39:38.042221    2140 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0524 19:39:38.042221    2140 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0524 19:39:38.042221    2140 command_runner.go:130] > /var/lib/minikube/etcd:
	I0524 19:39:38.042221    2140 command_runner.go:130] > member
	I0524 19:39:38.042221    2140 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0524 19:39:38.042221    2140 kubeadm.go:636] restartCluster start
	I0524 19:39:38.050264    2140 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0524 19:39:38.067033    2140 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0524 19:39:38.067876    2140 kubeconfig.go:135] verify returned: extract IP: "multinode-237000" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:39:38.067876    2140 kubeconfig.go:146] "multinode-237000" context is missing from C:\Users\jenkins.minikube1\minikube-integration\kubeconfig - will repair!
	I0524 19:39:38.068624    2140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:39:38.085028    2140 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:39:38.085666    2140 kapi.go:59] client config for multinode-237000: &rest.Config{Host:"https://172.27.143.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000/client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000/client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:39:38.087605    2140 cert_rotation.go:137] Starting client certificate rotation controller
	I0524 19:39:38.097125    2140 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0524 19:39:38.114012    2140 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0524 19:39:38.114012    2140 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0524 19:39:38.114012    2140 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0524 19:39:38.114012    2140 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0524 19:39:38.114012    2140 command_runner.go:130] >  kind: InitConfiguration
	I0524 19:39:38.114012    2140 command_runner.go:130] >  localAPIEndpoint:
	I0524 19:39:38.114134    2140 command_runner.go:130] > -  advertiseAddress: 172.27.130.107
	I0524 19:39:38.114134    2140 command_runner.go:130] > +  advertiseAddress: 172.27.143.236
	I0524 19:39:38.114134    2140 command_runner.go:130] >    bindPort: 8443
	I0524 19:39:38.114134    2140 command_runner.go:130] >  bootstrapTokens:
	I0524 19:39:38.114134    2140 command_runner.go:130] >    - groups:
	I0524 19:39:38.114134    2140 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0524 19:39:38.114185    2140 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0524 19:39:38.114185    2140 command_runner.go:130] >    name: "multinode-237000"
	I0524 19:39:38.114185    2140 command_runner.go:130] >    kubeletExtraArgs:
	I0524 19:39:38.114185    2140 command_runner.go:130] > -    node-ip: 172.27.130.107
	I0524 19:39:38.114238    2140 command_runner.go:130] > +    node-ip: 172.27.143.236
	I0524 19:39:38.114238    2140 command_runner.go:130] >    taints: []
	I0524 19:39:38.114238    2140 command_runner.go:130] >  ---
	I0524 19:39:38.114238    2140 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0524 19:39:38.114238    2140 command_runner.go:130] >  kind: ClusterConfiguration
	I0524 19:39:38.114238    2140 command_runner.go:130] >  apiServer:
	I0524 19:39:38.114238    2140 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.27.130.107"]
	I0524 19:39:38.114313    2140 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.27.143.236"]
	I0524 19:39:38.114313    2140 command_runner.go:130] >    extraArgs:
	I0524 19:39:38.114313    2140 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0524 19:39:38.114313    2140 command_runner.go:130] >  controllerManager:
	I0524 19:39:38.114376    2140 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.27.130.107
	+  advertiseAddress: 172.27.143.236
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-237000"
	   kubeletExtraArgs:
	-    node-ip: 172.27.130.107
	+    node-ip: 172.27.143.236
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.27.130.107"]
	+  certSANs: ["127.0.0.1", "localhost", "172.27.143.236"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0524 19:39:38.114467    2140 kubeadm.go:1123] stopping kube-system containers ...
	I0524 19:39:38.122081    2140 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 19:39:38.156439    2140 command_runner.go:130] > 0be0b91d6412
	I0524 19:39:38.156744    2140 command_runner.go:130] > 7975ebab5fd5
	I0524 19:39:38.156744    2140 command_runner.go:130] > 8b4ccab3df53
	I0524 19:39:38.156744    2140 command_runner.go:130] > 8f0c4af1fb5a
	I0524 19:39:38.156744    2140 command_runner.go:130] > a5f82b77134c
	I0524 19:39:38.156744    2140 command_runner.go:130] > 9e3f0057f97c
	I0524 19:39:38.156744    2140 command_runner.go:130] > 0dcfd5ea3653
	I0524 19:39:38.156744    2140 command_runner.go:130] > eca8b08a4576
	I0524 19:39:38.156831    2140 command_runner.go:130] > 7589cfe30be6
	I0524 19:39:38.156831    2140 command_runner.go:130] > bde0fe1b2458
	I0524 19:39:38.156831    2140 command_runner.go:130] > c29b9004260c
	I0524 19:39:38.156831    2140 command_runner.go:130] > 30b43ae6055b
	I0524 19:39:38.156831    2140 command_runner.go:130] > 4d0c225625eb
	I0524 19:39:38.156831    2140 command_runner.go:130] > 0c8db54a682a
	I0524 19:39:38.156831    2140 command_runner.go:130] > a31c29e9f798
	I0524 19:39:38.156831    2140 command_runner.go:130] > 1f6b2c280e52
	I0524 19:39:38.158672    2140 docker.go:459] Stopping containers: [0be0b91d6412 7975ebab5fd5 8b4ccab3df53 8f0c4af1fb5a a5f82b77134c 9e3f0057f97c 0dcfd5ea3653 eca8b08a4576 7589cfe30be6 bde0fe1b2458 c29b9004260c 30b43ae6055b 4d0c225625eb 0c8db54a682a a31c29e9f798 1f6b2c280e52]
	I0524 19:39:38.167571    2140 ssh_runner.go:195] Run: docker stop 0be0b91d6412 7975ebab5fd5 8b4ccab3df53 8f0c4af1fb5a a5f82b77134c 9e3f0057f97c 0dcfd5ea3653 eca8b08a4576 7589cfe30be6 bde0fe1b2458 c29b9004260c 30b43ae6055b 4d0c225625eb 0c8db54a682a a31c29e9f798 1f6b2c280e52
	I0524 19:39:38.207433    2140 command_runner.go:130] > 0be0b91d6412
	I0524 19:39:38.207502    2140 command_runner.go:130] > 7975ebab5fd5
	I0524 19:39:38.207502    2140 command_runner.go:130] > 8b4ccab3df53
	I0524 19:39:38.207502    2140 command_runner.go:130] > 8f0c4af1fb5a
	I0524 19:39:38.207502    2140 command_runner.go:130] > a5f82b77134c
	I0524 19:39:38.207502    2140 command_runner.go:130] > 9e3f0057f97c
	I0524 19:39:38.207569    2140 command_runner.go:130] > 0dcfd5ea3653
	I0524 19:39:38.207569    2140 command_runner.go:130] > eca8b08a4576
	I0524 19:39:38.207569    2140 command_runner.go:130] > 7589cfe30be6
	I0524 19:39:38.207569    2140 command_runner.go:130] > bde0fe1b2458
	I0524 19:39:38.207569    2140 command_runner.go:130] > c29b9004260c
	I0524 19:39:38.207569    2140 command_runner.go:130] > 30b43ae6055b
	I0524 19:39:38.207569    2140 command_runner.go:130] > 4d0c225625eb
	I0524 19:39:38.207569    2140 command_runner.go:130] > 0c8db54a682a
	I0524 19:39:38.207569    2140 command_runner.go:130] > a31c29e9f798
	I0524 19:39:38.207569    2140 command_runner.go:130] > 1f6b2c280e52
	I0524 19:39:38.215696    2140 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0524 19:39:38.258120    2140 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 19:39:38.276601    2140 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0524 19:39:38.276920    2140 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0524 19:39:38.276920    2140 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0524 19:39:38.276920    2140 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 19:39:38.277148    2140 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 19:39:38.287170    2140 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 19:39:38.303484    2140 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0524 19:39:38.303484    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 19:39:38.744409    2140 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0524 19:39:38.744500    2140 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0524 19:39:38.744500    2140 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0524 19:39:38.744500    2140 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0524 19:39:38.744500    2140 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0524 19:39:38.744500    2140 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0524 19:39:38.744500    2140 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0524 19:39:38.744500    2140 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0524 19:39:38.744625    2140 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0524 19:39:38.744625    2140 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0524 19:39:38.744625    2140 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0524 19:39:38.744625    2140 command_runner.go:130] > [certs] Using the existing "sa" key
	I0524 19:39:38.744625    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 19:39:39.830608    2140 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0524 19:39:39.831287    2140 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0524 19:39:39.831287    2140 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0524 19:39:39.831287    2140 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0524 19:39:39.831287    2140 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0524 19:39:39.831287    2140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0865676s)
	I0524 19:39:39.831287    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0524 19:39:40.107884    2140 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 19:39:40.108061    2140 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 19:39:40.108061    2140 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0524 19:39:40.108155    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 19:39:40.219156    2140 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0524 19:39:40.219246    2140 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0524 19:39:40.219246    2140 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0524 19:39:40.219246    2140 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0524 19:39:40.219323    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0524 19:39:40.315013    2140 command_runner.go:130] ! W0524 19:39:40.306184    1451 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 19:39:40.320349    2140 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
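
Rather than a full kubeadm init, the restart path replays individual init phases against the updated /var/tmp/minikube/kubeadm.yaml, reusing the existing certificates (see the [certs] output above) and rewriting only the kubeconfigs and static Pod manifests that the new advertise address affects. Condensed, the sequence logged above amounts to the following; the addon phase runs later, once the API server is healthy:

	cfg=/var/tmp/minikube/kubeadm.yaml
	run() { sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" "$@"; }
	run kubeadm init phase certs all --config "$cfg"
	run kubeadm init phase kubeconfig all --config "$cfg"
	run kubeadm init phase kubelet-start --config "$cfg"
	run kubeadm init phase control-plane all --config "$cfg"
	run kubeadm init phase etcd local --config "$cfg"
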
	I0524 19:39:40.320730    2140 api_server.go:52] waiting for apiserver process to appear ...
	I0524 19:39:40.331302    2140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:39:40.871575    2140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:39:41.374814    2140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:39:41.878015    2140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:39:42.369944    2140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:39:42.876663    2140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:39:43.369091    2140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:39:43.873975    2140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:39:43.936119    2140 command_runner.go:130] > 1794
	I0524 19:39:43.936208    2140 api_server.go:72] duration metric: took 3.6154786s to wait for apiserver process to appear ...
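
The repeated pgrep runs above are a simple readiness poll: roughly twice a second minikube checks whether a kube-apiserver process matching the minikube command line exists yet, and stops once a PID (1794 here) is returned. An equivalent wait loop in shell, using the pattern from the log:

	until pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); do
	  sleep 0.5    # pgrep exits non-zero until the process appears
	done
	echo "kube-apiserver pid: $pid"    # the log reports 1794
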
	I0524 19:39:43.936208    2140 api_server.go:88] waiting for apiserver healthz status ...
	I0524 19:39:43.936304    2140 api_server.go:253] Checking apiserver healthz at https://172.27.143.236:8443/healthz ...
	I0524 19:39:48.909090    2140 api_server.go:279] https://172.27.143.236:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0524 19:39:48.909090    2140 api_server.go:103] status: https://172.27.143.236:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0524 19:39:49.424786    2140 api_server.go:253] Checking apiserver healthz at https://172.27.143.236:8443/healthz ...
	I0524 19:39:49.435382    2140 api_server.go:279] https://172.27.143.236:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0524 19:39:49.435382    2140 api_server.go:103] status: https://172.27.143.236:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0524 19:39:49.916317    2140 api_server.go:253] Checking apiserver healthz at https://172.27.143.236:8443/healthz ...
	I0524 19:39:49.925033    2140 api_server.go:279] https://172.27.143.236:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0524 19:39:49.925033    2140 api_server.go:103] status: https://172.27.143.236:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0524 19:39:50.423820    2140 api_server.go:253] Checking apiserver healthz at https://172.27.143.236:8443/healthz ...
	I0524 19:39:50.436146    2140 api_server.go:279] https://172.27.143.236:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0524 19:39:50.436494    2140 api_server.go:103] status: https://172.27.143.236:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0524 19:39:50.916066    2140 api_server.go:253] Checking apiserver healthz at https://172.27.143.236:8443/healthz ...
	I0524 19:39:50.926743    2140 api_server.go:279] https://172.27.143.236:8443/healthz returned 200:
	ok
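
The healthz probe tolerates the early failures seen above: the 403 means anonymous requests are rejected while RBAC is still being bootstrapped, and the 500s list the post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) that have not finished yet. minikube simply re-polls roughly every half second until the endpoint returns 200. A rough curl equivalent of that wait (minikube does this in Go rather than with curl; the address is the one from this run):

	until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://172.27.143.236:8443/healthz)" = "200" ]; do
	  sleep 0.5    # a 403 or 500 here just means the control plane is still coming up
	done
	echo "apiserver healthz: ok"
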
	I0524 19:39:50.927249    2140 round_trippers.go:463] GET https://172.27.143.236:8443/version
	I0524 19:39:50.927313    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:50.927313    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:50.927313    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:50.950529    2140 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0524 19:39:50.950529    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:50.950529    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:50 GMT
	I0524 19:39:50.950934    2140 round_trippers.go:580]     Audit-Id: a409523b-edf6-4b98-82de-2aaafcac38b7
	I0524 19:39:50.950934    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:50.950934    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:50.950934    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:50.950934    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:50.950934    2140 round_trippers.go:580]     Content-Length: 263
	I0524 19:39:50.951064    2140 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.2",
	  "gitCommit": "7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647",
	  "gitTreeState": "clean",
	  "buildDate": "2023-05-17T14:13:28Z",
	  "goVersion": "go1.20.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0524 19:39:50.951238    2140 api_server.go:141] control plane version: v1.27.2
	I0524 19:39:50.951265    2140 api_server.go:131] duration metric: took 7.0150596s to wait for apiserver health ...
	I0524 19:39:50.951308    2140 cni.go:84] Creating CNI manager for ""
	I0524 19:39:50.951308    2140 cni.go:136] 3 nodes found, recommending kindnet
	I0524 19:39:50.959674    2140 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0524 19:39:50.971380    2140 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0524 19:39:50.988699    2140 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0524 19:39:50.988778    2140 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0524 19:39:50.988778    2140 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0524 19:39:50.988898    2140 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0524 19:39:50.988898    2140 command_runner.go:130] > Access: 2023-05-24 19:39:02.222529100 +0000
	I0524 19:39:50.988898    2140 command_runner.go:130] > Modify: 2023-05-20 04:10:39.000000000 +0000
	I0524 19:39:50.988898    2140 command_runner.go:130] > Change: 2023-05-24 19:38:51.773000000 +0000
	I0524 19:39:50.988898    2140 command_runner.go:130] >  Birth: -
	I0524 19:39:50.991143    2140 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0524 19:39:50.991202    2140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0524 19:39:51.101687    2140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0524 19:39:53.129201    2140 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0524 19:39:53.134745    2140 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0524 19:39:53.140733    2140 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0524 19:39:53.166423    2140 command_runner.go:130] > daemonset.apps/kindnet configured
	I0524 19:39:53.173292    2140 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.071605s)
	I0524 19:39:53.173363    2140 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 19:39:53.173571    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods
	I0524 19:39:53.173571    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:53.173571    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:53.173650    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:53.185227    2140 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0524 19:39:53.185227    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:53.185227    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:53 GMT
	I0524 19:39:53.185227    2140 round_trippers.go:580]     Audit-Id: 567b4dbc-a0a3-4830-bcc9-54276cca1921
	I0524 19:39:53.185227    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:53.185227    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:53.185227    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:53.185227    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:53.186216    2140 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1234"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84184 chars]
	I0524 19:39:53.193617    2140 system_pods.go:59] 12 kube-system pods found
	I0524 19:39:53.193617    2140 system_pods.go:61] "coredns-5d78c9869d-qhx48" [12d04c63-9898-4ccf-9e6d-92d8f3d086a4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0524 19:39:53.193617    2140 system_pods.go:61] "etcd-multinode-237000" [4b73c6ae-c8c9-444c-a5b5-a4bb2e724689] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0524 19:39:53.193617    2140 system_pods.go:61] "kindnet-9g7mc" [196b59a1-ab49-49e0-a26e-93c1f8b3f039] Running
	I0524 19:39:53.193617    2140 system_pods.go:61] "kindnet-fzbwb" [c04e7f28-21e2-4e88-9ac3-00c6b8c208e0] Running
	I0524 19:39:53.193617    2140 system_pods.go:61] "kindnet-xgkpb" [92abc556-b250-4017-9b7c-0fed1aefe2d6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0524 19:39:53.193617    2140 system_pods.go:61] "kube-apiserver-multinode-237000" [46721249-af81-40ba-b756-6f9def350d07] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0524 19:39:53.193617    2140 system_pods.go:61] "kube-controller-manager-multinode-237000" [1ff7b570-afe4-4076-989f-d0377d04f9d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0524 19:39:53.193793    2140 system_pods.go:61] "kube-proxy-4qmlh" [3c277e06-12a4-451c-ad5b-15cc2bd169ad] Running
	I0524 19:39:53.193793    2140 system_pods.go:61] "kube-proxy-r6f94" [90a232cf-33b3-4e3b-82bf-9050d39109d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0524 19:39:53.193793    2140 system_pods.go:61] "kube-proxy-zglzj" [af1fb911-5877-4bcc-92f4-5571f489122c] Running
	I0524 19:39:53.193793    2140 system_pods.go:61] "kube-scheduler-multinode-237000" [a55c419f-1b04-4895-9fd5-02dd67cd888f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0524 19:39:53.193793    2140 system_pods.go:61] "storage-provisioner" [6498131a-f2e2-4098-9a5f-6c277fae3747] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0524 19:39:53.193885    2140 system_pods.go:74] duration metric: took 20.5219ms to wait for pod list to return data ...
	I0524 19:39:53.193885    2140 node_conditions.go:102] verifying NodePressure condition ...
	I0524 19:39:53.194012    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes
	I0524 19:39:53.194039    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:53.194039    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:53.194039    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:53.201922    2140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0524 19:39:53.201922    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:53.201922    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:53.201922    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:53.201922    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:53.201922    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:53 GMT
	I0524 19:39:53.201922    2140 round_trippers.go:580]     Audit-Id: 194ef554-afa1-41ec-8d46-31c012d0eb4b
	I0524 19:39:53.201922    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:53.202924    2140 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1234"},"items":[{"metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15941 chars]
	I0524 19:39:53.204797    2140 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:39:53.204919    2140 node_conditions.go:123] node cpu capacity is 2
	I0524 19:39:53.204919    2140 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:39:53.204919    2140 node_conditions.go:123] node cpu capacity is 2
	I0524 19:39:53.204919    2140 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:39:53.204919    2140 node_conditions.go:123] node cpu capacity is 2
	I0524 19:39:53.204919    2140 node_conditions.go:105] duration metric: took 11.034ms to run NodePressure ...
	I0524 19:39:53.205013    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 19:39:53.498517    2140 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0524 19:39:53.609422    2140 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0524 19:39:53.611704    2140 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0524 19:39:53.611923    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0524 19:39:53.611923    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:53.611923    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:53.611923    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:53.623908    2140 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0524 19:39:53.623908    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:53.623908    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:53.623908    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:53.623908    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:53 GMT
	I0524 19:39:53.623908    2140 round_trippers.go:580]     Audit-Id: 8dee412e-89d3-40a9-b8a3-536d59c9aa32
	I0524 19:39:53.623908    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:53.623908    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:53.624745    2140 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1237"},"items":[{"metadata":{"name":"etcd-multinode-237000","namespace":"kube-system","uid":"4b73c6ae-c8c9-444c-a5b5-a4bb2e724689","resourceVersion":"1215","creationTimestamp":"2023-05-24T19:39:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.143.236:2379","kubernetes.io/config.hash":"a462e4d9e600aa9f863cde3f240bd69a","kubernetes.io/config.mirror":"a462e4d9e600aa9f863cde3f240bd69a","kubernetes.io/config.seen":"2023-05-24T19:39:40.956259078Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:39:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 29377 chars]
	I0524 19:39:53.626358    2140 kubeadm.go:787] kubelet initialised
	I0524 19:39:53.626405    2140 kubeadm.go:788] duration metric: took 14.7011ms waiting for restarted kubelet to initialise ...
	I0524 19:39:53.626474    2140 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 19:39:53.626624    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods
	I0524 19:39:53.626676    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:53.626676    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:53.626676    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:53.640689    2140 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0524 19:39:53.640689    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:53.640689    2140 round_trippers.go:580]     Audit-Id: f2c187fb-e240-4af1-a258-6f128559a506
	I0524 19:39:53.640689    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:53.640689    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:53.640689    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:53.640689    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:53.640689    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:53 GMT
	I0524 19:39:53.641694    2140 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1237"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 84184 chars]
	I0524 19:39:53.645969    2140 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace to be "Ready" ...
	I0524 19:39:53.646504    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:39:53.646504    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:53.646504    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:53.646504    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:53.649784    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:39:53.649784    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:53.649784    2140 round_trippers.go:580]     Audit-Id: 8e89e84c-f48c-4e4b-84bf-e433b0b8c846
	I0524 19:39:53.649894    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:53.649933    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:53.649933    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:53.649981    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:53.649981    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:53 GMT
	I0524 19:39:53.650120    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0524 19:39:53.650750    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:53.650750    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:53.650832    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:53.650832    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:53.659186    2140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:39:53.659186    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:53.659186    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:53 GMT
	I0524 19:39:53.659186    2140 round_trippers.go:580]     Audit-Id: 355cb89d-0d68-4825-adad-efd8acdb0f06
	I0524 19:39:53.659186    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:53.659186    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:53.659186    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:53.659186    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:53.659186    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:53.660227    2140 pod_ready.go:97] node "multinode-237000" hosting pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-237000" has status "Ready":"False"
	I0524 19:39:53.660227    2140 pod_ready.go:81] duration metric: took 14.2583ms waiting for pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace to be "Ready" ...
	E0524 19:39:53.660227    2140 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-237000" hosting pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-237000" has status "Ready":"False"
	I0524 19:39:53.660227    2140 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:39:53.660227    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-237000
	I0524 19:39:53.660227    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:53.660227    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:53.660227    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:53.663172    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:39:53.663172    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:53.663172    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:53.663172    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:53.663172    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:53 GMT
	I0524 19:39:53.663172    2140 round_trippers.go:580]     Audit-Id: 100b1546-a9d5-49cf-b781-0789b2638053
	I0524 19:39:53.663172    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:53.663172    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:53.663172    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-237000","namespace":"kube-system","uid":"4b73c6ae-c8c9-444c-a5b5-a4bb2e724689","resourceVersion":"1215","creationTimestamp":"2023-05-24T19:39:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.143.236:2379","kubernetes.io/config.hash":"a462e4d9e600aa9f863cde3f240bd69a","kubernetes.io/config.mirror":"a462e4d9e600aa9f863cde3f240bd69a","kubernetes.io/config.seen":"2023-05-24T19:39:40.956259078Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:39:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6097 chars]
	I0524 19:39:53.664031    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:53.664031    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:53.664237    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:53.664237    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:53.666997    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:39:53.666997    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:53.666997    2140 round_trippers.go:580]     Audit-Id: 9771ed29-68b3-4c53-b133-fdc7f15f2da1
	I0524 19:39:53.666997    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:53.666997    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:53.666997    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:53.666997    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:53.667437    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:53 GMT
	I0524 19:39:53.667693    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:53.667693    2140 pod_ready.go:97] node "multinode-237000" hosting pod "etcd-multinode-237000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-237000" has status "Ready":"False"
	I0524 19:39:53.667693    2140 pod_ready.go:81] duration metric: took 7.4657ms waiting for pod "etcd-multinode-237000" in "kube-system" namespace to be "Ready" ...
	E0524 19:39:53.667693    2140 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-237000" hosting pod "etcd-multinode-237000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-237000" has status "Ready":"False"
	I0524 19:39:53.667693    2140 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:39:53.668290    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-237000
	I0524 19:39:53.668290    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:53.668290    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:53.668290    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:53.679856    2140 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0524 19:39:53.679856    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:53.680618    2140 round_trippers.go:580]     Audit-Id: 4c586125-9565-4e68-9af7-c5861c24afe1
	I0524 19:39:53.680618    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:53.680618    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:53.680618    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:53.680618    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:53.680618    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:53 GMT
	I0524 19:39:53.680954    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-237000","namespace":"kube-system","uid":"46721249-af81-40ba-b756-6f9def350d07","resourceVersion":"1216","creationTimestamp":"2023-05-24T19:39:50Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.143.236:8443","kubernetes.io/config.hash":"4278bfa912c61c7340a8d49488981a6d","kubernetes.io/config.mirror":"4278bfa912c61c7340a8d49488981a6d","kubernetes.io/config.seen":"2023-05-24T19:39:40.956261577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:39:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7653 chars]
	I0524 19:39:53.681561    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:53.681561    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:53.681561    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:53.681561    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:53.684801    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:39:53.684801    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:53.684801    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:53.684801    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:53 GMT
	I0524 19:39:53.684801    2140 round_trippers.go:580]     Audit-Id: 326ab777-27d3-440f-8622-6a56acb12906
	I0524 19:39:53.684801    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:53.684801    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:53.684801    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:53.684801    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:53.684801    2140 pod_ready.go:97] node "multinode-237000" hosting pod "kube-apiserver-multinode-237000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-237000" has status "Ready":"False"
	I0524 19:39:53.684801    2140 pod_ready.go:81] duration metric: took 17.1081ms waiting for pod "kube-apiserver-multinode-237000" in "kube-system" namespace to be "Ready" ...
	E0524 19:39:53.684801    2140 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-237000" hosting pod "kube-apiserver-multinode-237000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-237000" has status "Ready":"False"
	I0524 19:39:53.685793    2140 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:39:53.685793    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-237000
	I0524 19:39:53.685793    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:53.685793    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:53.685793    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:53.688806    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:39:53.688806    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:53.688806    2140 round_trippers.go:580]     Audit-Id: 5cee7f8e-0a68-4aae-ad98-2d399aa260ae
	I0524 19:39:53.688806    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:53.688806    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:53.688806    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:53.688806    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:53.688806    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:53 GMT
	I0524 19:39:53.688806    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-237000","namespace":"kube-system","uid":"1ff7b570-afe4-4076-989f-d0377d04f9d5","resourceVersion":"1208","creationTimestamp":"2023-05-24T19:27:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"64b5c92760605da2056b367669d6fc80","kubernetes.io/config.mirror":"64b5c92760605da2056b367669d6fc80","kubernetes.io/config.seen":"2023-05-24T19:27:00.264375644Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7441 chars]
	I0524 19:39:53.690156    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:53.690258    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:53.690258    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:53.690258    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:53.692790    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:39:53.692790    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:53.693595    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:53.693595    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:53 GMT
	I0524 19:39:53.693595    2140 round_trippers.go:580]     Audit-Id: 328c4ee0-1cc2-429e-bf70-95e2e7e0b019
	I0524 19:39:53.693595    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:53.693595    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:53.693595    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:53.693929    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:53.694362    2140 pod_ready.go:97] node "multinode-237000" hosting pod "kube-controller-manager-multinode-237000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-237000" has status "Ready":"False"
	I0524 19:39:53.694422    2140 pod_ready.go:81] duration metric: took 8.6285ms waiting for pod "kube-controller-manager-multinode-237000" in "kube-system" namespace to be "Ready" ...
	E0524 19:39:53.694422    2140 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-237000" hosting pod "kube-controller-manager-multinode-237000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-237000" has status "Ready":"False"
	I0524 19:39:53.694422    2140 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4qmlh" in "kube-system" namespace to be "Ready" ...
	I0524 19:39:53.817960    2140 request.go:628] Waited for 123.4775ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qmlh
	I0524 19:39:53.818285    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qmlh
	I0524 19:39:53.818285    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:53.818285    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:53.818347    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:53.822604    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:39:53.822604    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:53.822604    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:53.823023    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:53.823023    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:53.823062    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:53.823062    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:53 GMT
	I0524 19:39:53.823062    2140 round_trippers.go:580]     Audit-Id: a0ab24ce-2f55-4061-a054-e80c31c611d7
	I0524 19:39:53.823392    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4qmlh","generateName":"kube-proxy-","namespace":"kube-system","uid":"3c277e06-12a4-451c-ad5b-15cc2bd169ad","resourceVersion":"1123","creationTimestamp":"2023-05-24T19:32:20Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:32:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5749 chars]
	I0524 19:39:54.020639    2140 request.go:628] Waited for 196.4925ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:39:54.020849    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:39:54.020849    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:54.020849    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:54.020937    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:54.024865    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:39:54.024865    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:54.024865    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:54.024865    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:54.024865    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:54.024865    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:54.024865    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:54 GMT
	I0524 19:39:54.024865    2140 round_trippers.go:580]     Audit-Id: 19184f83-1568-4a32-86f9-2f45ca36d45e
	I0524 19:39:54.025893    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"05dd373e-a994-4789-af16-d10bfd472a98","resourceVersion":"1135","creationTimestamp":"2023-05-24T19:37:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4177 chars]
	I0524 19:39:54.026505    2140 pod_ready.go:92] pod "kube-proxy-4qmlh" in "kube-system" namespace has status "Ready":"True"
	I0524 19:39:54.026505    2140 pod_ready.go:81] duration metric: took 332.0838ms waiting for pod "kube-proxy-4qmlh" in "kube-system" namespace to be "Ready" ...
	I0524 19:39:54.026505    2140 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-r6f94" in "kube-system" namespace to be "Ready" ...
	I0524 19:39:54.226135    2140 request.go:628] Waited for 199.4261ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6f94
	I0524 19:39:54.226383    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6f94
	I0524 19:39:54.226383    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:54.226383    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:54.226383    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:54.231024    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:39:54.231202    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:54.231202    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:54.231202    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:54.231202    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:54.231202    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:54.231283    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:54 GMT
	I0524 19:39:54.231283    2140 round_trippers.go:580]     Audit-Id: 1fbb1af8-683c-4e8f-af4a-df6eac7327ba
	I0524 19:39:54.231339    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r6f94","generateName":"kube-proxy-","namespace":"kube-system","uid":"90a232cf-33b3-4e3b-82bf-9050d39109d1","resourceVersion":"1221","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5931 chars]
	I0524 19:39:54.415343    2140 request.go:628] Waited for 182.9474ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:54.415563    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:54.415671    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:54.415671    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:54.415671    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:54.420039    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:39:54.420951    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:54.420951    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:54 GMT
	I0524 19:39:54.420951    2140 round_trippers.go:580]     Audit-Id: 8f4186f2-dee2-460c-a8fd-89db3ea8ceb9
	I0524 19:39:54.420951    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:54.420951    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:54.420951    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:54.420951    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:54.421223    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:54.421615    2140 pod_ready.go:97] node "multinode-237000" hosting pod "kube-proxy-r6f94" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-237000" has status "Ready":"False"
	I0524 19:39:54.421716    2140 pod_ready.go:81] duration metric: took 395.2107ms waiting for pod "kube-proxy-r6f94" in "kube-system" namespace to be "Ready" ...
	E0524 19:39:54.421716    2140 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-237000" hosting pod "kube-proxy-r6f94" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-237000" has status "Ready":"False"
	I0524 19:39:54.421716    2140 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-zglzj" in "kube-system" namespace to be "Ready" ...
	I0524 19:39:54.619484    2140 request.go:628] Waited for 197.6199ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zglzj
	I0524 19:39:54.619690    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zglzj
	I0524 19:39:54.619690    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:54.619690    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:54.619690    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:54.627297    2140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0524 19:39:54.627297    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:54.627297    2140 round_trippers.go:580]     Audit-Id: efde24e5-d386-43e2-af29-ecfced82cc05
	I0524 19:39:54.627297    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:54.627297    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:54.627297    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:54.627297    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:54.627297    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:54 GMT
	I0524 19:39:54.629166    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zglzj","generateName":"kube-proxy-","namespace":"kube-system","uid":"af1fb911-5877-4bcc-92f4-5571f489122c","resourceVersion":"550","creationTimestamp":"2023-05-24T19:29:22Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5543 chars]
	I0524 19:39:54.824029    2140 request.go:628] Waited for 193.9757ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:39:54.824243    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:39:54.824243    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:54.824243    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:54.824381    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:54.827635    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:39:54.828449    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:54.828449    2140 round_trippers.go:580]     Audit-Id: 1a4def19-d3dc-4fce-9600-cb36fe352011
	I0524 19:39:54.828449    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:54.828536    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:54.828536    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:54.828536    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:54.828536    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:54 GMT
	I0524 19:39:54.828749    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"958","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4359 chars]
	I0524 19:39:54.828749    2140 pod_ready.go:92] pod "kube-proxy-zglzj" in "kube-system" namespace has status "Ready":"True"
	I0524 19:39:54.828749    2140 pod_ready.go:81] duration metric: took 407.0329ms waiting for pod "kube-proxy-zglzj" in "kube-system" namespace to be "Ready" ...
	I0524 19:39:54.829333    2140 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:39:55.013497    2140 request.go:628] Waited for 183.8696ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-237000
	I0524 19:39:55.013578    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-237000
	I0524 19:39:55.013578    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:55.013648    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:55.013648    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:55.018090    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:39:55.018899    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:55.018899    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:55.018899    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:55 GMT
	I0524 19:39:55.018899    2140 round_trippers.go:580]     Audit-Id: cb8a60c6-ecf1-42c0-a27b-5e4cecb383b7
	I0524 19:39:55.018899    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:55.018987    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:55.018987    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:55.019144    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-237000","namespace":"kube-system","uid":"a55c419f-1b04-4895-9fd5-02dd67cd888f","resourceVersion":"1213","creationTimestamp":"2023-05-24T19:27:12Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b26a06953be724b5f34183ed712fbb3d","kubernetes.io/config.mirror":"b26a06953be724b5f34183ed712fbb3d","kubernetes.io/config.seen":"2023-05-24T19:27:12.143961333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5153 chars]
	I0524 19:39:55.217407    2140 request.go:628] Waited for 197.8506ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:55.217651    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:55.217651    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:55.217651    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:55.217651    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:55.222250    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:39:55.222250    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:55.222250    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:55.222250    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:55 GMT
	I0524 19:39:55.222250    2140 round_trippers.go:580]     Audit-Id: 12e2a4be-6cc5-4e5c-951b-c1d7a458c008
	I0524 19:39:55.223290    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:55.223341    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:55.223341    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:55.223569    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:55.224052    2140 pod_ready.go:97] node "multinode-237000" hosting pod "kube-scheduler-multinode-237000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-237000" has status "Ready":"False"
	I0524 19:39:55.224132    2140 pod_ready.go:81] duration metric: took 394.7987ms waiting for pod "kube-scheduler-multinode-237000" in "kube-system" namespace to be "Ready" ...
	E0524 19:39:55.224132    2140 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-237000" hosting pod "kube-scheduler-multinode-237000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-237000" has status "Ready":"False"
	I0524 19:39:55.224132    2140 pod_ready.go:38] duration metric: took 1.5976063s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
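The GET pod / GET node pairs above all follow one pattern: each system pod is only counted as "Ready" once the node hosting it reports the NodeReady condition as True; otherwise the wait is skipped with the pod_ready.go:97 message. Below is a minimal client-go sketch of that check as a reader's aid only (this is not minikube's actual pod_ready.go; the package and function names are illustrative):

	package readiness
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// podReadyOnReadyNode mirrors the GET pod / GET node pairs in the log:
	// a pod only counts as ready once the node hosting it reports
	// NodeReady=True; otherwise the wait is skipped, as the pod_ready.go:97
	// lines report.
	func podReadyOnReadyNode(ctx context.Context, cs kubernetes.Interface, ns, pod string) (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		n, err := cs.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Skip the pod if its node is not Ready yet.
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
				return false, fmt.Errorf("node %q hosting pod %q is not Ready", n.Name, pod)
			}
		}
		// Otherwise report the pod's own Ready condition.
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	}

In this run every skip comes from multinode-237000 itself still reporting Ready=False after the restart, which is why the separate node_ready.go wait further down keeps polling the same node object.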
	I0524 19:39:55.224215    2140 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 19:39:55.247964    2140 command_runner.go:130] > -16
	I0524 19:39:55.248030    2140 ops.go:34] apiserver oom_adj: -16
	I0524 19:39:55.248030    2140 kubeadm.go:640] restartCluster took 17.2058156s
	I0524 19:39:55.248030    2140 kubeadm.go:406] StartCluster complete in 17.2804199s
	I0524 19:39:55.248100    2140 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:39:55.248238    2140 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:39:55.249606    2140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:39:55.250812    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 19:39:55.250993    2140 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0524 19:39:55.256974    2140 out.go:177] * Enabled addons: 
	I0524 19:39:55.251827    2140 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:39:55.259283    2140 addons.go:499] enable addons completed in 8.3132ms: enabled=[]
	I0524 19:39:55.263891    2140 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:39:55.264656    2140 kapi.go:59] client config for multinode-237000: &rest.Config{Host:"https://172.27.143.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
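The recurring request.go:628 "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's default client-side rate limiter: the rest.Config dumped above carries QPS:0, Burst:0, which client-go treats as its defaults of 5 requests per second with a burst of 10. A minimal sketch, assuming a caller building its own clientset from the same kubeconfig, of how those limits would be raised (package and function names are illustrative, not part of minikube):

	package clientcfg
	
	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// newClient loads a kubeconfig and raises the client-side rate limits.
	// QPS:0 / Burst:0, as in the dumped rest.Config, makes client-go fall
	// back to its defaults (5 QPS, burst 10), which is what produces the
	// "Waited ... due to client-side throttling" messages above.
	func newClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // allow more sustained requests per second
		cfg.Burst = 100 // allow a larger burst before throttling kicks in
		return kubernetes.NewForConfig(cfg)
	}

The throttling delays in this log are all under 200ms, so they only add noise to the duration metrics rather than contributing to any failure.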
	I0524 19:39:55.266200    2140 cert_rotation.go:137] Starting client certificate rotation controller
	I0524 19:39:55.266438    2140 round_trippers.go:463] GET https://172.27.143.236:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0524 19:39:55.266438    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:55.266438    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:55.266438    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:55.281991    2140 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0524 19:39:55.282642    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:55.282642    2140 round_trippers.go:580]     Audit-Id: f6dd8402-8e80-4026-a728-b80e1ffecbc9
	I0524 19:39:55.282642    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:55.282642    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:55.282719    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:55.282719    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:55.282750    2140 round_trippers.go:580]     Content-Length: 292
	I0524 19:39:55.282750    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:55 GMT
	I0524 19:39:55.282750    2140 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9016559d-3c59-4f76-8961-1b5665cb8836","resourceVersion":"1236","creationTimestamp":"2023-05-24T19:27:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0524 19:39:55.282903    2140 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-237000" context rescaled to 1 replicas
	I0524 19:39:55.282903    2140 start.go:223] Will wait 6m0s for node &{Name: IP:172.27.143.236 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
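The kapi.go:248 rescale above goes through the Deployment's scale subresource; the GET on .../deployments/coredns/scale already returns spec.replicas: 1, so nothing changes in this run. A minimal client-go sketch of that read-then-write pattern, with illustrative names and assuming the same clientset as in the earlier sketches:

	package rescale
	
	import (
		"context"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// scaleCoreDNS reads the coredns scale subresource and writes the desired
	// replica count back only if it differs. With spec.replicas already 1, as
	// in the response body above, no update is issued.
	func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if scale.Spec.Replicas == replicas {
			return nil // nothing to do
		}
		scale.Spec.Replicas = replicas
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}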
	I0524 19:39:55.286118    2140 out.go:177] * Verifying Kubernetes components...
	I0524 19:39:55.299352    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 19:39:55.422725    2140 command_runner.go:130] > apiVersion: v1
	I0524 19:39:55.422725    2140 command_runner.go:130] > data:
	I0524 19:39:55.422725    2140 command_runner.go:130] >   Corefile: |
	I0524 19:39:55.422725    2140 command_runner.go:130] >     .:53 {
	I0524 19:39:55.422725    2140 command_runner.go:130] >         log
	I0524 19:39:55.422725    2140 command_runner.go:130] >         errors
	I0524 19:39:55.422725    2140 command_runner.go:130] >         health {
	I0524 19:39:55.422725    2140 command_runner.go:130] >            lameduck 5s
	I0524 19:39:55.422725    2140 command_runner.go:130] >         }
	I0524 19:39:55.422725    2140 command_runner.go:130] >         ready
	I0524 19:39:55.422725    2140 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0524 19:39:55.422725    2140 command_runner.go:130] >            pods insecure
	I0524 19:39:55.422725    2140 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0524 19:39:55.422725    2140 command_runner.go:130] >            ttl 30
	I0524 19:39:55.422725    2140 command_runner.go:130] >         }
	I0524 19:39:55.422725    2140 command_runner.go:130] >         prometheus :9153
	I0524 19:39:55.422725    2140 command_runner.go:130] >         hosts {
	I0524 19:39:55.422725    2140 command_runner.go:130] >            172.27.128.1 host.minikube.internal
	I0524 19:39:55.422725    2140 command_runner.go:130] >            fallthrough
	I0524 19:39:55.422725    2140 command_runner.go:130] >         }
	I0524 19:39:55.422725    2140 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0524 19:39:55.422725    2140 command_runner.go:130] >            max_concurrent 1000
	I0524 19:39:55.422725    2140 command_runner.go:130] >         }
	I0524 19:39:55.422725    2140 command_runner.go:130] >         cache 30
	I0524 19:39:55.422725    2140 command_runner.go:130] >         loop
	I0524 19:39:55.422725    2140 command_runner.go:130] >         reload
	I0524 19:39:55.422725    2140 command_runner.go:130] >         loadbalance
	I0524 19:39:55.422725    2140 command_runner.go:130] >     }
	I0524 19:39:55.422725    2140 command_runner.go:130] > kind: ConfigMap
	I0524 19:39:55.422725    2140 command_runner.go:130] > metadata:
	I0524 19:39:55.422725    2140 command_runner.go:130] >   creationTimestamp: "2023-05-24T19:27:11Z"
	I0524 19:39:55.422725    2140 command_runner.go:130] >   name: coredns
	I0524 19:39:55.422725    2140 command_runner.go:130] >   namespace: kube-system
	I0524 19:39:55.422725    2140 command_runner.go:130] >   resourceVersion: "369"
	I0524 19:39:55.422725    2140 command_runner.go:130] >   uid: 51dbda2b-4334-4537-869d-860680c0ab81
	I0524 19:39:55.422725    2140 node_ready.go:35] waiting up to 6m0s for node "multinode-237000" to be "Ready" ...
	I0524 19:39:55.423366    2140 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0524 19:39:55.423366    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:55.423366    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:55.423366    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:55.423366    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:55.427012    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:39:55.427564    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:55.427564    2140 round_trippers.go:580]     Audit-Id: 3df4d1d1-6007-4736-b051-a254e5bf37cd
	I0524 19:39:55.427564    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:55.427564    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:55.427564    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:55.427637    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:55.427637    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:55 GMT
	I0524 19:39:55.427843    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:55.929282    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:55.929352    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:55.929352    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:55.929352    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:55.937799    2140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:39:55.937799    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:55.937799    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:55.937799    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:55.937799    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:55 GMT
	I0524 19:39:55.937799    2140 round_trippers.go:580]     Audit-Id: aead687a-9da7-4957-8501-95c70daebbf6
	I0524 19:39:55.937799    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:55.937799    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:55.937799    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:56.431595    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:56.431595    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:56.431595    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:56.431595    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:56.436211    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:39:56.436262    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:56.436262    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:56.436262    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:56.436262    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:56.436262    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:56.436262    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:56 GMT
	I0524 19:39:56.436326    2140 round_trippers.go:580]     Audit-Id: 0c4f43d6-e8ce-4eb0-ab59-4da85825117b
	I0524 19:39:56.436457    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:56.934900    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:56.934900    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:56.935019    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:56.935019    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:56.939420    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:39:56.939420    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:56.939420    2140 round_trippers.go:580]     Audit-Id: d97b309f-6a33-462b-9991-7c2d7a7a50e3
	I0524 19:39:56.939420    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:56.939420    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:56.939999    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:56.939999    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:56.940056    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:56 GMT
	I0524 19:39:56.940241    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:57.443073    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:57.443073    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:57.443073    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:57.443073    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:57.448678    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:39:57.448819    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:57.448819    2140 round_trippers.go:580]     Audit-Id: aeba43ef-9f52-45ca-9c57-f8fd21806e7a
	I0524 19:39:57.448819    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:57.448907    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:57.448907    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:57.448907    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:57.448907    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:57 GMT
	I0524 19:39:57.448975    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:57.449520    2140 node_ready.go:58] node "multinode-237000" has status "Ready":"False"
	I0524 19:39:57.929660    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:57.929711    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:57.929711    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:57.929711    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:57.933139    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:39:57.933139    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:57.933139    2140 round_trippers.go:580]     Audit-Id: 817c96dc-f46c-48a7-aef4-3d0226d42b61
	I0524 19:39:57.933590    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:57.933590    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:57.933626    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:57.933626    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:57.933626    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:57 GMT
	I0524 19:39:57.933848    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:58.431849    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:58.431913    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:58.431913    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:58.432006    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:58.435424    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:39:58.435424    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:58.436315    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:58.436315    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:58.436315    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:58.436315    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:58.436315    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:58 GMT
	I0524 19:39:58.436315    2140 round_trippers.go:580]     Audit-Id: 77d009d3-3df9-40ee-96a1-87f8047d929e
	I0524 19:39:58.436569    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:58.930746    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:58.930746    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:58.930746    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:58.930871    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:58.936655    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:39:58.936655    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:58.936655    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:58.936655    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:58.936655    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:58 GMT
	I0524 19:39:58.936655    2140 round_trippers.go:580]     Audit-Id: 8cc3b516-1c73-4d7f-a8d4-44318b7989b4
	I0524 19:39:58.936655    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:58.936923    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:58.937177    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:59.436608    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:59.436720    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:59.436720    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:59.436720    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:59.448137    2140 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0524 19:39:59.448137    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:59.448660    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:59.448660    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:59.448660    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:59.448660    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:59.448660    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:59 GMT
	I0524 19:39:59.448660    2140 round_trippers.go:580]     Audit-Id: e6e46bb2-bdf4-41a1-81dd-c34b59966aa0
	I0524 19:39:59.448852    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:59.936548    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:39:59.936636    2140 round_trippers.go:469] Request Headers:
	I0524 19:39:59.936636    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:39:59.936704    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:39:59.941442    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:39:59.941442    2140 round_trippers.go:577] Response Headers:
	I0524 19:39:59.941977    2140 round_trippers.go:580]     Audit-Id: a3a48283-39f9-4a73-ab7a-7824bacef260
	I0524 19:39:59.941977    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:39:59.941977    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:39:59.941977    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:39:59.941977    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:39:59.941977    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:39:59 GMT
	I0524 19:39:59.942114    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:39:59.942665    2140 node_ready.go:58] node "multinode-237000" has status "Ready":"False"
	I0524 19:40:00.441146    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:00.441146    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:00.441146    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:00.441146    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:00.446252    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:40:00.446252    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:00.446252    2140 round_trippers.go:580]     Audit-Id: a2d019c8-cdb2-4fc6-aebd-60fffc21a3d7
	I0524 19:40:00.446252    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:00.446252    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:00.446252    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:00.446252    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:00.446252    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:00 GMT
	I0524 19:40:00.446252    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:40:00.941459    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:00.941459    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:00.941655    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:00.941655    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:00.946424    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:00.946424    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:00.946424    2140 round_trippers.go:580]     Audit-Id: 6c78507e-02c3-428a-a725-1a460f4304ea
	I0524 19:40:00.946424    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:00.946424    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:00.946424    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:00.946424    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:00.946424    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:00 GMT
	I0524 19:40:00.947010    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:40:01.442525    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:01.442525    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:01.442604    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:01.442604    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:01.452064    2140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0524 19:40:01.452064    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:01.452064    2140 round_trippers.go:580]     Audit-Id: e32ace70-6ab2-4a97-ab94-3ab68f23ced7
	I0524 19:40:01.452064    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:01.452239    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:01.452239    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:01.452239    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:01.452239    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:01 GMT
	I0524 19:40:01.452599    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1164","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5367 chars]
	I0524 19:40:01.937722    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:01.937770    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:01.937770    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:01.937770    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:01.941314    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:40:01.941314    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:01.941405    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:01.941405    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:01.941405    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:01 GMT
	I0524 19:40:01.941405    2140 round_trippers.go:580]     Audit-Id: 02385062-327b-44df-9fd6-b47ddc7126e2
	I0524 19:40:01.941405    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:01.941405    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:01.941737    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:01.942247    2140 node_ready.go:49] node "multinode-237000" has status "Ready":"True"
	I0524 19:40:01.942308    2140 node_ready.go:38] duration metric: took 6.519031s waiting for node "multinode-237000" to be "Ready" ...
	I0524 19:40:01.942308    2140 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 19:40:01.942370    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods
	I0524 19:40:01.942370    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:01.942370    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:01.942370    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:01.946987    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:01.946987    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:01.947901    2140 round_trippers.go:580]     Audit-Id: c7f3bf84-e17a-4e01-8a29-8134795642fd
	I0524 19:40:01.947901    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:01.947901    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:01.947901    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:01.947901    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:01.947996    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:01 GMT
	I0524 19:40:01.953084    2140 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1272"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83106 chars]
	I0524 19:40:01.957646    2140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace to be "Ready" ...
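The per-pod wait below applies the same pattern to each system-critical pod, treating a pod as "Ready" once its PodReady condition is True. A minimal client-go sketch of that check follows; the namespace and pod name are taken from the log, while the helper and kubeconfig handling are illustrative assumptions rather than minikube's code.

// Sketch only: check a pod's PodReady condition, the test behind the
// pod_ready wait in this log. Setup details are assumptions for the example.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Pod and namespace names as they appear in the log above.
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5d78c9869d-qhx48", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", podReady(pod))
}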
	I0524 19:40:01.957646    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:40:01.957646    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:01.957646    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:01.957646    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:01.962106    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:01.962195    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:01.962195    2140 round_trippers.go:580]     Audit-Id: ba7b0bb8-0bb7-49ba-be18-29f8359c568a
	I0524 19:40:01.962195    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:01.962195    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:01.962260    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:01.962260    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:01.962260    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:01 GMT
	I0524 19:40:01.962427    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0524 19:40:01.962795    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:01.962795    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:01.962795    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:01.962795    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:01.966718    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:40:01.966718    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:01.966718    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:01.966799    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:01.966799    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:01.966799    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:01.966835    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:01 GMT
	I0524 19:40:01.966835    2140 round_trippers.go:580]     Audit-Id: 442e0874-572a-47fe-a121-ce0796a5f370
	I0524 19:40:01.967116    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:02.472486    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:40:02.472593    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:02.472593    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:02.472593    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:02.476900    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:02.477951    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:02.478002    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:02.478002    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:02.478002    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:02 GMT
	I0524 19:40:02.478002    2140 round_trippers.go:580]     Audit-Id: 8ec474c9-54a1-4455-845f-4884b2695242
	I0524 19:40:02.478002    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:02.478085    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:02.478145    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0524 19:40:02.479007    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:02.479007    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:02.479007    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:02.479007    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:02.481750    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:40:02.481750    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:02.482617    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:02.482617    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:02 GMT
	I0524 19:40:02.482617    2140 round_trippers.go:580]     Audit-Id: 2c26af18-a928-41b9-b05f-7a641f4bcfa2
	I0524 19:40:02.482617    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:02.482617    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:02.482693    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:02.482763    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:02.974701    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:40:02.974748    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:02.974841    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:02.974841    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:02.979070    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:02.979452    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:02.979452    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:02.979549    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:02.979549    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:02 GMT
	I0524 19:40:02.979549    2140 round_trippers.go:580]     Audit-Id: 3eb4da1f-8f8d-43f7-80f0-fa7636010e2f
	I0524 19:40:02.979549    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:02.979549    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:02.979722    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0524 19:40:02.980738    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:02.980824    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:02.980824    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:02.980824    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:02.984057    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:40:02.984057    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:02.984057    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:02.984057    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:02.984057    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:02.984057    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:02.984057    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:02 GMT
	I0524 19:40:02.984057    2140 round_trippers.go:580]     Audit-Id: 4e6d672b-25ec-404f-9a35-6906cabc6d32
	I0524 19:40:02.984057    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:03.474312    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:40:03.474425    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:03.474425    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:03.474508    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:03.479257    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:03.479257    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:03.479257    2140 round_trippers.go:580]     Audit-Id: 72a15850-60a8-4d9a-8d6c-10bfe11f26a0
	I0524 19:40:03.479257    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:03.479480    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:03.479480    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:03.479480    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:03.479480    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:03 GMT
	I0524 19:40:03.479739    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0524 19:40:03.480569    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:03.480569    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:03.480569    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:03.480569    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:03.487855    2140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0524 19:40:03.487855    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:03.487855    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:03 GMT
	I0524 19:40:03.487855    2140 round_trippers.go:580]     Audit-Id: 679ebf9d-e715-4c65-a2df-8793f5f9002f
	I0524 19:40:03.487855    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:03.487855    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:03.487855    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:03.487855    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:03.489242    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:03.973152    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:40:03.973252    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:03.973252    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:03.973252    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:03.976603    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:40:03.976603    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:03.977637    2140 round_trippers.go:580]     Audit-Id: 52d74e91-6adf-4e58-811e-280b8daec4ce
	I0524 19:40:03.977661    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:03.977661    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:03.977661    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:03.977720    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:03.977720    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:03 GMT
	I0524 19:40:03.977921    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0524 19:40:03.978669    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:03.978732    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:03.978732    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:03.978732    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:03.982509    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:40:03.982509    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:03.982509    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:03.982509    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:03.982509    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:03 GMT
	I0524 19:40:03.982509    2140 round_trippers.go:580]     Audit-Id: a7727db9-946d-4be8-8715-a168b5ad6d19
	I0524 19:40:03.982509    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:03.982509    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:03.982509    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:03.983530    2140 pod_ready.go:102] pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace has status "Ready":"False"
	I0524 19:40:04.476335    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:40:04.476335    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:04.476464    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:04.476464    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:04.479825    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:40:04.480845    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:04.480875    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:04 GMT
	I0524 19:40:04.480875    2140 round_trippers.go:580]     Audit-Id: df020565-f3ca-4f8c-86d9-14442f890eba
	I0524 19:40:04.480875    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:04.480875    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:04.480875    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:04.480946    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:04.481007    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0524 19:40:04.481724    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:04.481796    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:04.481796    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:04.481796    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:04.484704    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:40:04.484704    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:04.484704    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:04.484704    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:04 GMT
	I0524 19:40:04.484704    2140 round_trippers.go:580]     Audit-Id: 20ec0ceb-dcf9-4966-8beb-be112b687358
	I0524 19:40:04.484704    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:04.484704    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:04.484704    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:04.485997    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:04.976771    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:40:04.976857    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:04.976857    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:04.976936    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:04.981190    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:04.981190    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:04.981190    2140 round_trippers.go:580]     Audit-Id: c8403b45-d021-4aa5-a3cc-276301dd1f6c
	I0524 19:40:04.982198    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:04.982198    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:04.982224    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:04.982224    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:04.982224    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:04 GMT
	I0524 19:40:04.982405    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0524 19:40:04.983115    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:04.983115    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:04.983183    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:04.983183    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:04.986522    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:40:04.986522    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:04.986522    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:04.986522    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:04.986522    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:04.986974    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:04 GMT
	I0524 19:40:04.986974    2140 round_trippers.go:580]     Audit-Id: 2eaa2049-6ac0-47a1-a01b-6e0e190908fe
	I0524 19:40:04.986974    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:04.988508    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:05.477508    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:40:05.477508    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:05.477508    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:05.477580    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:05.481294    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:40:05.482045    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:05.482107    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:05.482107    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:05.482107    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:05.482107    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:05 GMT
	I0524 19:40:05.482107    2140 round_trippers.go:580]     Audit-Id: 28f09819-cc86-4314-9767-ad549591c4b9
	I0524 19:40:05.482107    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:05.482651    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0524 19:40:05.483431    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:05.483496    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:05.483496    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:05.483496    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:05.486703    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:40:05.486703    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:05.487132    2140 round_trippers.go:580]     Audit-Id: 15f9828b-3ca6-4092-bfae-f0f95ab09c73
	I0524 19:40:05.487273    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:05.487347    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:05.487347    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:05.487347    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:05.487347    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:05 GMT
	I0524 19:40:05.487347    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:05.975042    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:40:05.975113    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:05.975113    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:05.975185    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:05.985876    2140 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0524 19:40:05.985876    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:05.985876    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:05.985876    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:05.985876    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:05.985876    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:05.986337    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:05 GMT
	I0524 19:40:05.986337    2140 round_trippers.go:580]     Audit-Id: 0ec19c6a-81b2-4cd2-bd79-c39c261391ae
	I0524 19:40:05.986412    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0524 19:40:05.986412    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:05.986412    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:05.986412    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:05.986412    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:05.990788    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:05.990788    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:05.990788    2140 round_trippers.go:580]     Audit-Id: 9fdcfee7-146f-4978-bddd-0f579089a672
	I0524 19:40:05.990788    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:05.990788    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:05.990788    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:05.990788    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:05.990788    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:05 GMT
	I0524 19:40:05.991986    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:05.991986    2140 pod_ready.go:102] pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace has status "Ready":"False"
	I0524 19:40:06.479715    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:40:06.479882    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:06.479932    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:06.479932    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:06.484332    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:06.485324    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:06.485392    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:06.485392    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:06.485392    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:06.485392    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:06.485392    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:06 GMT
	I0524 19:40:06.485450    2140 round_trippers.go:580]     Audit-Id: 46a807df-be1e-4f38-b09d-1498c901fa43
	I0524 19:40:06.485670    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0524 19:40:06.486467    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:06.486467    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:06.486467    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:06.486467    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:06.488844    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:40:06.489903    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:06.489903    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:06.489903    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:06 GMT
	I0524 19:40:06.490021    2140 round_trippers.go:580]     Audit-Id: 4d94c327-c856-4b86-bd31-b3cdee697a93
	I0524 19:40:06.490021    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:06.490021    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:06.490075    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:06.490177    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:06.971139    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:40:06.971217    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:06.971262    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:06.971262    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:06.977707    2140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0524 19:40:06.977707    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:06.978196    2140 round_trippers.go:580]     Audit-Id: cde54a89-57b7-49c5-b004-92014e47fb56
	I0524 19:40:06.978196    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:06.978196    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:06.978196    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:06.978288    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:06.978288    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:06 GMT
	I0524 19:40:06.978523    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0524 19:40:06.979747    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:06.979801    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:06.979847    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:06.979847    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:06.986097    2140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0524 19:40:06.986097    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:06.986097    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:06.986097    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:06 GMT
	I0524 19:40:06.986097    2140 round_trippers.go:580]     Audit-Id: f8bebef1-b9a8-41bf-90e6-e8dce2423159
	I0524 19:40:06.986097    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:06.986097    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:06.986097    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:06.986097    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:07.479592    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:40:07.479676    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:07.479676    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:07.479754    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:07.485066    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:40:07.485066    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:07.485066    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:07 GMT
	I0524 19:40:07.485298    2140 round_trippers.go:580]     Audit-Id: bf6838c6-c943-4423-be9b-d087dc7963e0
	I0524 19:40:07.485298    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:07.485298    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:07.485298    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:07.485298    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:07.485574    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0524 19:40:07.486234    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:07.486234    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:07.486234    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:07.486234    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:07.489533    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:40:07.489533    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:07.489533    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:07 GMT
	I0524 19:40:07.489533    2140 round_trippers.go:580]     Audit-Id: 5c9db43f-c51b-4e84-936e-ffc73f1ff167
	I0524 19:40:07.489533    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:07.489533    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:07.489533    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:07.489533    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:07.490246    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:07.972772    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:40:07.972772    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:07.972772    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:07.972772    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:07.978020    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:40:07.978020    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:07.978020    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:07.978020    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:07.978020    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:07.978020    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:07.978435    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:07 GMT
	I0524 19:40:07.978435    2140 round_trippers.go:580]     Audit-Id: 61a4e444-10b7-48e3-ad1d-6482a025e07d
	I0524 19:40:07.978582    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1220","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6546 chars]
	I0524 19:40:07.979316    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:07.979316    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:07.979316    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:07.979316    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:07.982562    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:40:07.982562    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:07.982562    2140 round_trippers.go:580]     Audit-Id: 576871f6-6fcd-4cd4-8dc0-899e1f9f0151
	I0524 19:40:07.982562    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:07.982562    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:07.982917    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:07.982917    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:07.982917    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:07 GMT
	I0524 19:40:07.983174    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:08.476074    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:40:08.476074    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:08.476074    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:08.476143    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:08.480403    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:08.480467    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:08.480467    2140 round_trippers.go:580]     Audit-Id: 93309a95-a04f-4c69-9bf4-c3004f115d1f
	I0524 19:40:08.480467    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:08.480467    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:08.480467    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:08.480467    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:08.480467    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:08 GMT
	I0524 19:40:08.480766    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1290","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0524 19:40:08.481498    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:08.481498    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:08.481555    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:08.481555    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:08.484727    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:40:08.484727    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:08.484727    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:08.484727    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:08.484727    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:08 GMT
	I0524 19:40:08.484727    2140 round_trippers.go:580]     Audit-Id: 349a3ad0-f526-4529-9111-3e4593d6d5e2
	I0524 19:40:08.484727    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:08.484727    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:08.484727    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:08.485456    2140 pod_ready.go:92] pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace has status "Ready":"True"
	I0524 19:40:08.485456    2140 pod_ready.go:81] duration metric: took 6.5278131s waiting for pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:08.485456    2140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:08.485708    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-237000
	I0524 19:40:08.485708    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:08.485708    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:08.485708    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:08.488504    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:40:08.488504    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:08.488635    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:08 GMT
	I0524 19:40:08.488635    2140 round_trippers.go:580]     Audit-Id: 7d998af3-cd12-40ff-a7a1-f44b0aa9470a
	I0524 19:40:08.488635    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:08.488635    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:08.488693    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:08.488693    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:08.488906    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-237000","namespace":"kube-system","uid":"4b73c6ae-c8c9-444c-a5b5-a4bb2e724689","resourceVersion":"1274","creationTimestamp":"2023-05-24T19:39:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.143.236:2379","kubernetes.io/config.hash":"a462e4d9e600aa9f863cde3f240bd69a","kubernetes.io/config.mirror":"a462e4d9e600aa9f863cde3f240bd69a","kubernetes.io/config.seen":"2023-05-24T19:39:40.956259078Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:39:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0524 19:40:08.489372    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:08.489372    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:08.489443    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:08.489443    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:08.494126    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:08.494126    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:08.494205    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:08 GMT
	I0524 19:40:08.494205    2140 round_trippers.go:580]     Audit-Id: 4f15667a-72d9-4f16-9780-055abd9a3dbe
	I0524 19:40:08.494205    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:08.494205    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:08.494205    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:08.494205    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:08.494441    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:08.494936    2140 pod_ready.go:92] pod "etcd-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:40:08.494936    2140 pod_ready.go:81] duration metric: took 9.4799ms waiting for pod "etcd-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:08.495001    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:08.495052    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-237000
	I0524 19:40:08.495131    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:08.495151    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:08.495168    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:08.497157    2140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0524 19:40:08.498123    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:08.498123    2140 round_trippers.go:580]     Audit-Id: 3fb387aa-4673-48fb-bc6b-c3bb3330ff5b
	I0524 19:40:08.498123    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:08.498123    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:08.498123    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:08.498123    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:08.498123    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:08 GMT
	I0524 19:40:08.498123    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-237000","namespace":"kube-system","uid":"46721249-af81-40ba-b756-6f9def350d07","resourceVersion":"1248","creationTimestamp":"2023-05-24T19:39:50Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.143.236:8443","kubernetes.io/config.hash":"4278bfa912c61c7340a8d49488981a6d","kubernetes.io/config.mirror":"4278bfa912c61c7340a8d49488981a6d","kubernetes.io/config.seen":"2023-05-24T19:39:40.956261577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:39:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0524 19:40:08.498892    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:08.498892    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:08.498892    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:08.498892    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:08.504189    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:40:08.504189    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:08.504189    2140 round_trippers.go:580]     Audit-Id: d2fd76a4-95e3-4d6f-b0a5-8484c4e3e078
	I0524 19:40:08.504189    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:08.504189    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:08.504189    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:08.504189    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:08.504189    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:08 GMT
	I0524 19:40:08.505176    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:08.505987    2140 pod_ready.go:92] pod "kube-apiserver-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:40:08.505987    2140 pod_ready.go:81] duration metric: took 10.986ms waiting for pod "kube-apiserver-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:08.505987    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:08.505987    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-237000
	I0524 19:40:08.505987    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:08.505987    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:08.505987    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:08.508587    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:40:08.508587    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:08.508587    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:08.508587    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:08.508587    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:08.509426    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:08 GMT
	I0524 19:40:08.509426    2140 round_trippers.go:580]     Audit-Id: 1f04b88d-747a-4193-8199-d15d8317c30b
	I0524 19:40:08.509426    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:08.509712    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-237000","namespace":"kube-system","uid":"1ff7b570-afe4-4076-989f-d0377d04f9d5","resourceVersion":"1273","creationTimestamp":"2023-05-24T19:27:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"64b5c92760605da2056b367669d6fc80","kubernetes.io/config.mirror":"64b5c92760605da2056b367669d6fc80","kubernetes.io/config.seen":"2023-05-24T19:27:00.264375644Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0524 19:40:08.510392    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:08.510392    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:08.510469    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:08.510469    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:08.519300    2140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:40:08.519300    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:08.519300    2140 round_trippers.go:580]     Audit-Id: 152623fb-15cd-41d0-b610-ac53165daacd
	I0524 19:40:08.519300    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:08.519300    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:08.519300    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:08.519300    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:08.519300    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:08 GMT
	I0524 19:40:08.519300    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:08.520595    2140 pod_ready.go:92] pod "kube-controller-manager-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:40:08.520595    2140 pod_ready.go:81] duration metric: took 14.608ms waiting for pod "kube-controller-manager-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:08.520595    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4qmlh" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:08.520595    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qmlh
	I0524 19:40:08.520595    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:08.520595    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:08.520595    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:08.523249    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:40:08.523249    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:08.523249    2140 round_trippers.go:580]     Audit-Id: 419d66de-e796-4829-89dd-56332d89c209
	I0524 19:40:08.523249    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:08.523249    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:08.523249    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:08.523249    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:08.523249    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:08 GMT
	I0524 19:40:08.524302    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4qmlh","generateName":"kube-proxy-","namespace":"kube-system","uid":"3c277e06-12a4-451c-ad5b-15cc2bd169ad","resourceVersion":"1123","creationTimestamp":"2023-05-24T19:32:20Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:32:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5749 chars]
	I0524 19:40:08.524833    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:40:08.524833    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:08.524833    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:08.524833    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:08.528170    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:40:08.528297    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:08.528297    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:08.528297    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:08.528297    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:08.528297    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:08.528297    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:08 GMT
	I0524 19:40:08.528414    2140 round_trippers.go:580]     Audit-Id: 61c5dedf-2806-4d08-b39c-af092ccefb9d
	I0524 19:40:08.528644    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"05dd373e-a994-4789-af16-d10bfd472a98","resourceVersion":"1135","creationTimestamp":"2023-05-24T19:37:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4177 chars]
	I0524 19:40:08.528977    2140 pod_ready.go:92] pod "kube-proxy-4qmlh" in "kube-system" namespace has status "Ready":"True"
	I0524 19:40:08.529071    2140 pod_ready.go:81] duration metric: took 8.4759ms waiting for pod "kube-proxy-4qmlh" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:08.529071    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r6f94" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:08.677492    2140 request.go:628] Waited for 148.4211ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6f94
	I0524 19:40:08.677752    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6f94
	I0524 19:40:08.677752    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:08.677752    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:08.677752    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:08.681923    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:08.681923    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:08.681923    2140 round_trippers.go:580]     Audit-Id: 664983fe-1c16-4271-8a3e-99c8e13461d8
	I0524 19:40:08.681923    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:08.681923    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:08.682105    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:08.682105    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:08.682105    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:08 GMT
	I0524 19:40:08.682174    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r6f94","generateName":"kube-proxy-","namespace":"kube-system","uid":"90a232cf-33b3-4e3b-82bf-9050d39109d1","resourceVersion":"1243","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I0524 19:40:08.879842    2140 request.go:628] Waited for 197.0834ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:08.880069    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:08.880069    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:08.880069    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:08.880169    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:08.884420    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:08.884420    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:08.884420    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:08.884492    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:08 GMT
	I0524 19:40:08.884492    2140 round_trippers.go:580]     Audit-Id: 685273a5-06eb-4ad9-8ef0-4e4a38e6ddac
	I0524 19:40:08.884492    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:08.884492    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:08.884556    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:08.884799    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:08.884988    2140 pod_ready.go:92] pod "kube-proxy-r6f94" in "kube-system" namespace has status "Ready":"True"
	I0524 19:40:08.884988    2140 pod_ready.go:81] duration metric: took 355.9165ms waiting for pod "kube-proxy-r6f94" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:08.884988    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zglzj" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:09.083432    2140 request.go:628] Waited for 198.2447ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zglzj
	I0524 19:40:09.083524    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zglzj
	I0524 19:40:09.083524    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:09.083750    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:09.083750    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:09.087801    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:09.087801    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:09.087801    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:09.088198    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:09.088198    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:09 GMT
	I0524 19:40:09.088198    2140 round_trippers.go:580]     Audit-Id: 1e2b8e91-c859-414d-a1d1-24b0262ad233
	I0524 19:40:09.088198    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:09.088247    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:09.088421    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zglzj","generateName":"kube-proxy-","namespace":"kube-system","uid":"af1fb911-5877-4bcc-92f4-5571f489122c","resourceVersion":"550","creationTimestamp":"2023-05-24T19:29:22Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\ [truncated 5543 chars]
	I0524 19:40:09.285060    2140 request.go:628] Waited for 195.9643ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:40:09.285352    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:40:09.285352    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:09.285352    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:09.285352    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:09.293865    2140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:40:09.293865    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:09.293865    2140 round_trippers.go:580]     Audit-Id: c4d6eff1-e7c6-4a35-bce4-8fa309388600
	I0524 19:40:09.293865    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:09.293865    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:09.293865    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:09.293865    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:09.293865    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:09 GMT
	I0524 19:40:09.293865    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507","resourceVersion":"958","creationTimestamp":"2023-05-24T19:29:21Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4359 chars]
	I0524 19:40:09.293865    2140 pod_ready.go:92] pod "kube-proxy-zglzj" in "kube-system" namespace has status "Ready":"True"
	I0524 19:40:09.293865    2140 pod_ready.go:81] duration metric: took 408.8772ms waiting for pod "kube-proxy-zglzj" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:09.293865    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:09.488567    2140 request.go:628] Waited for 194.5351ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-237000
	I0524 19:40:09.488731    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-237000
	I0524 19:40:09.488731    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:09.488731    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:09.488731    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:09.493063    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:09.493063    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:09.493063    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:09.493063    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:09 GMT
	I0524 19:40:09.493476    2140 round_trippers.go:580]     Audit-Id: 95dff62c-2468-4dfb-a53a-a873a330cb7b
	I0524 19:40:09.493476    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:09.493476    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:09.493476    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:09.493687    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-237000","namespace":"kube-system","uid":"a55c419f-1b04-4895-9fd5-02dd67cd888f","resourceVersion":"1252","creationTimestamp":"2023-05-24T19:27:12Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b26a06953be724b5f34183ed712fbb3d","kubernetes.io/config.mirror":"b26a06953be724b5f34183ed712fbb3d","kubernetes.io/config.seen":"2023-05-24T19:27:12.143961333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0524 19:40:09.676965    2140 request.go:628] Waited for 182.1629ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:09.677190    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:40:09.677190    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:09.677253    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:09.677253    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:09.688888    2140 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0524 19:40:09.688888    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:09.688888    2140 round_trippers.go:580]     Audit-Id: 2324129a-0080-4bcb-8925-1d33f12cd330
	I0524 19:40:09.688888    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:09.689241    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:09.689241    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:09.689280    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:09.689280    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:09 GMT
	I0524 19:40:09.689437    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:40:09.689983    2140 pod_ready.go:92] pod "kube-scheduler-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:40:09.689983    2140 pod_ready.go:81] duration metric: took 396.1183ms waiting for pod "kube-scheduler-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:40:09.690045    2140 pod_ready.go:38] duration metric: took 7.7477395s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
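For reference, the readiness phase logged above boils down to repeatedly fetching each kube-system pod and checking its Ready condition. Below is a minimal client-go sketch of that pattern; it is illustrative only (not minikube's pod_ready.go), and assumes `cs` is an already-configured *kubernetes.Clientset and `ctx` carries the overall timeout.

```go
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady polls a pod until its Ready condition is True, mirroring
// the "waiting for pod ... to be Ready" phases in the log above.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	return wait.PollImmediateUntilWithContext(ctx, 500*time.Millisecond, func(ctx context.Context) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient API errors as "not ready yet"
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```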
	I0524 19:40:09.690045    2140 api_server.go:52] waiting for apiserver process to appear ...
	I0524 19:40:09.700435    2140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:40:09.722522    2140 command_runner.go:130] > 1794
	I0524 19:40:09.722623    2140 api_server.go:72] duration metric: took 14.4394783s to wait for apiserver process to appear ...
	I0524 19:40:09.722623    2140 api_server.go:88] waiting for apiserver healthz status ...
	I0524 19:40:09.722623    2140 api_server.go:253] Checking apiserver healthz at https://172.27.143.236:8443/healthz ...
	I0524 19:40:09.734252    2140 api_server.go:279] https://172.27.143.236:8443/healthz returned 200:
	ok
	I0524 19:40:09.734343    2140 round_trippers.go:463] GET https://172.27.143.236:8443/version
	I0524 19:40:09.734395    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:09.734395    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:09.734395    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:09.736071    2140 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0524 19:40:09.736071    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:09.736071    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:09.736169    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:09.736169    2140 round_trippers.go:580]     Content-Length: 263
	I0524 19:40:09.736169    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:09 GMT
	I0524 19:40:09.736224    2140 round_trippers.go:580]     Audit-Id: 8910b3e3-0a4b-46b1-bfc8-004f74f9e3f6
	I0524 19:40:09.736224    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:09.736243    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:09.736243    2140 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.2",
	  "gitCommit": "7f6f68fdabc4df88cfea2dcf9a19b2b830f1e647",
	  "gitTreeState": "clean",
	  "buildDate": "2023-05-17T14:13:28Z",
	  "goVersion": "go1.20.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0524 19:40:09.736342    2140 api_server.go:141] control plane version: v1.27.2
	I0524 19:40:09.736342    2140 api_server.go:131] duration metric: took 13.719ms to wait for apiserver health ...
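The apiserver check above is a plain HTTPS probe of /healthz followed by a /version read. A hedged sketch of that probe is shown below; the function name is illustrative, `client` is assumed to be an *http.Client already loaded with the cluster CA and client certificates, and `base` is the endpoint from this log (https://172.27.143.236:8443).

```go
package sketch

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// apiserverHealthy reproduces the healthz + /version probe pattern above.
func apiserverHealthy(client *http.Client, base string) (string, error) {
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return "", err
	}
	resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("healthz returned %d", resp.StatusCode)
	}

	vresp, err := client.Get(base + "/version")
	if err != nil {
		return "", err
	}
	defer vresp.Body.Close()
	var v struct {
		GitVersion string `json:"gitVersion"`
	}
	if err := json.NewDecoder(vresp.Body).Decode(&v); err != nil {
		return "", err
	}
	return v.GitVersion, nil // "v1.27.2" in this run
}
```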
	I0524 19:40:09.736342    2140 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 19:40:09.879182    2140 request.go:628] Waited for 142.4409ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods
	I0524 19:40:09.879182    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods
	I0524 19:40:09.879182    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:09.879182    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:09.879182    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:09.886994    2140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0524 19:40:09.886994    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:09.886994    2140 round_trippers.go:580]     Audit-Id: 7163e73b-e56e-4f32-aa76-6e2994c5114b
	I0524 19:40:09.886994    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:09.886994    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:09.886994    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:09.886994    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:09.886994    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:09 GMT
	I0524 19:40:09.889866    2140 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1296"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1290","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82568 chars]
	I0524 19:40:09.894179    2140 system_pods.go:59] 12 kube-system pods found
	I0524 19:40:09.894179    2140 system_pods.go:61] "coredns-5d78c9869d-qhx48" [12d04c63-9898-4ccf-9e6d-92d8f3d086a4] Running
	I0524 19:40:09.894179    2140 system_pods.go:61] "etcd-multinode-237000" [4b73c6ae-c8c9-444c-a5b5-a4bb2e724689] Running
	I0524 19:40:09.894179    2140 system_pods.go:61] "kindnet-9g7mc" [196b59a1-ab49-49e0-a26e-93c1f8b3f039] Running
	I0524 19:40:09.894179    2140 system_pods.go:61] "kindnet-fzbwb" [c04e7f28-21e2-4e88-9ac3-00c6b8c208e0] Running
	I0524 19:40:09.894179    2140 system_pods.go:61] "kindnet-xgkpb" [92abc556-b250-4017-9b7c-0fed1aefe2d6] Running
	I0524 19:40:09.894179    2140 system_pods.go:61] "kube-apiserver-multinode-237000" [46721249-af81-40ba-b756-6f9def350d07] Running
	I0524 19:40:09.894179    2140 system_pods.go:61] "kube-controller-manager-multinode-237000" [1ff7b570-afe4-4076-989f-d0377d04f9d5] Running
	I0524 19:40:09.894179    2140 system_pods.go:61] "kube-proxy-4qmlh" [3c277e06-12a4-451c-ad5b-15cc2bd169ad] Running
	I0524 19:40:09.894179    2140 system_pods.go:61] "kube-proxy-r6f94" [90a232cf-33b3-4e3b-82bf-9050d39109d1] Running
	I0524 19:40:09.894179    2140 system_pods.go:61] "kube-proxy-zglzj" [af1fb911-5877-4bcc-92f4-5571f489122c] Running
	I0524 19:40:09.894179    2140 system_pods.go:61] "kube-scheduler-multinode-237000" [a55c419f-1b04-4895-9fd5-02dd67cd888f] Running
	I0524 19:40:09.894179    2140 system_pods.go:61] "storage-provisioner" [6498131a-f2e2-4098-9a5f-6c277fae3747] Running
	I0524 19:40:09.894179    2140 system_pods.go:74] duration metric: took 157.8365ms to wait for pod list to return data ...
	I0524 19:40:09.894179    2140 default_sa.go:34] waiting for default service account to be created ...
	I0524 19:40:10.080624    2140 request.go:628] Waited for 186.2749ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/default/serviceaccounts
	I0524 19:40:10.080624    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/default/serviceaccounts
	I0524 19:40:10.080624    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:10.080624    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:10.080624    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:10.085212    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:10.085212    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:10.085212    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:10.085212    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:10.085625    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:10.085625    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:10.085625    2140 round_trippers.go:580]     Content-Length: 262
	I0524 19:40:10.085669    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:10 GMT
	I0524 19:40:10.085669    2140 round_trippers.go:580]     Audit-Id: 90ffc139-54f6-466d-b8bc-332624db6a98
	I0524 19:40:10.085669    2140 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1296"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"25405341-c1be-4363-86d7-6385725f43ef","resourceVersion":"324","creationTimestamp":"2023-05-24T19:27:24Z"}}]}
	I0524 19:40:10.085669    2140 default_sa.go:45] found service account: "default"
	I0524 19:40:10.085669    2140 default_sa.go:55] duration metric: took 191.4902ms for default service account to be created ...
	I0524 19:40:10.085669    2140 system_pods.go:116] waiting for k8s-apps to be running ...
	I0524 19:40:10.282288    2140 request.go:628] Waited for 196.6192ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods
	I0524 19:40:10.282804    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods
	I0524 19:40:10.282804    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:10.282804    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:10.282879    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:10.288221    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:40:10.289216    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:10.289216    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:10 GMT
	I0524 19:40:10.289216    2140 round_trippers.go:580]     Audit-Id: 0b5da0e3-7ed8-49ba-82d2-33773d90211b
	I0524 19:40:10.289216    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:10.289216    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:10.289216    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:10.289216    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:10.290057    2140 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1296"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1290","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82568 chars]
	I0524 19:40:10.295049    2140 system_pods.go:86] 12 kube-system pods found
	I0524 19:40:10.295049    2140 system_pods.go:89] "coredns-5d78c9869d-qhx48" [12d04c63-9898-4ccf-9e6d-92d8f3d086a4] Running
	I0524 19:40:10.295118    2140 system_pods.go:89] "etcd-multinode-237000" [4b73c6ae-c8c9-444c-a5b5-a4bb2e724689] Running
	I0524 19:40:10.295118    2140 system_pods.go:89] "kindnet-9g7mc" [196b59a1-ab49-49e0-a26e-93c1f8b3f039] Running
	I0524 19:40:10.295118    2140 system_pods.go:89] "kindnet-fzbwb" [c04e7f28-21e2-4e88-9ac3-00c6b8c208e0] Running
	I0524 19:40:10.295118    2140 system_pods.go:89] "kindnet-xgkpb" [92abc556-b250-4017-9b7c-0fed1aefe2d6] Running
	I0524 19:40:10.295118    2140 system_pods.go:89] "kube-apiserver-multinode-237000" [46721249-af81-40ba-b756-6f9def350d07] Running
	I0524 19:40:10.295118    2140 system_pods.go:89] "kube-controller-manager-multinode-237000" [1ff7b570-afe4-4076-989f-d0377d04f9d5] Running
	I0524 19:40:10.295118    2140 system_pods.go:89] "kube-proxy-4qmlh" [3c277e06-12a4-451c-ad5b-15cc2bd169ad] Running
	I0524 19:40:10.295187    2140 system_pods.go:89] "kube-proxy-r6f94" [90a232cf-33b3-4e3b-82bf-9050d39109d1] Running
	I0524 19:40:10.295187    2140 system_pods.go:89] "kube-proxy-zglzj" [af1fb911-5877-4bcc-92f4-5571f489122c] Running
	I0524 19:40:10.295187    2140 system_pods.go:89] "kube-scheduler-multinode-237000" [a55c419f-1b04-4895-9fd5-02dd67cd888f] Running
	I0524 19:40:10.295187    2140 system_pods.go:89] "storage-provisioner" [6498131a-f2e2-4098-9a5f-6c277fae3747] Running
	I0524 19:40:10.295187    2140 system_pods.go:126] duration metric: took 209.5186ms to wait for k8s-apps to be running ...
	I0524 19:40:10.295187    2140 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 19:40:10.305194    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 19:40:10.326327    2140 system_svc.go:56] duration metric: took 31.14ms WaitForService to wait for kubelet.
	I0524 19:40:10.327146    2140 kubeadm.go:581] duration metric: took 15.0440008s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 19:40:10.327222    2140 node_conditions.go:102] verifying NodePressure condition ...
	I0524 19:40:10.486089    2140 request.go:628] Waited for 158.6217ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes
	I0524 19:40:10.486201    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes
	I0524 19:40:10.486363    2140 round_trippers.go:469] Request Headers:
	I0524 19:40:10.486363    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:40:10.486363    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:40:10.491155    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:40:10.491155    2140 round_trippers.go:577] Response Headers:
	I0524 19:40:10.491155    2140 round_trippers.go:580]     Audit-Id: b0803abf-6105-44fa-b9a5-a733b3c106a1
	I0524 19:40:10.492191    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:40:10.492191    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:40:10.492191    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:40:10.492191    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:40:10.492191    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:40:10 GMT
	I0524 19:40:10.492343    2140 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1296"},"items":[{"metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15814 chars]
	I0524 19:40:10.493156    2140 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:40:10.493156    2140 node_conditions.go:123] node cpu capacity is 2
	I0524 19:40:10.493156    2140 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:40:10.493156    2140 node_conditions.go:123] node cpu capacity is 2
	I0524 19:40:10.493156    2140 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:40:10.493156    2140 node_conditions.go:123] node cpu capacity is 2
	I0524 19:40:10.493156    2140 node_conditions.go:105] duration metric: took 165.9337ms to run NodePressure ...
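The NodePressure step above reads each node's ephemeral-storage and CPU capacity from the node list. A minimal client-go sketch of that read is below; it is illustrative only (not minikube's node_conditions.go) and assumes `cs` is a configured *kubernetes.Clientset.

```go
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacities lists all nodes and prints the two capacities logged
// above (ephemeral storage and CPU) for each of them.
func printNodeCapacities(ctx context.Context, cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
```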
	I0524 19:40:10.493156    2140 start.go:228] waiting for startup goroutines ...
	I0524 19:40:10.493156    2140 start.go:233] waiting for cluster config update ...
	I0524 19:40:10.493156    2140 start.go:242] writing updated cluster config ...
	I0524 19:40:10.507011    2140 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:40:10.507011    2140 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\config.json ...
	I0524 19:40:10.514413    2140 out.go:177] * Starting worker node multinode-237000-m02 in cluster multinode-237000
	I0524 19:40:10.517586    2140 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 19:40:10.517586    2140 cache.go:57] Caching tarball of preloaded images
	I0524 19:40:10.517586    2140 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0524 19:40:10.517586    2140 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 19:40:10.517586    2140 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\config.json ...
	I0524 19:40:10.520365    2140 cache.go:195] Successfully downloaded all kic artifacts
	I0524 19:40:10.520365    2140 start.go:364] acquiring machines lock for multinode-237000-m02: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 19:40:10.520365    2140 start.go:368] acquired machines lock for "multinode-237000-m02" in 0s
	I0524 19:40:10.520788    2140 start.go:96] Skipping create...Using existing machine configuration
	I0524 19:40:10.520845    2140 fix.go:55] fixHost starting: m02
	I0524 19:40:10.521018    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:11.260419    2140 main.go:141] libmachine: [stdout =====>] : Off
	
	I0524 19:40:11.260559    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:11.260559    2140 fix.go:103] recreateIfNeeded on multinode-237000-m02: state=Stopped err=<nil>
	W0524 19:40:11.260559    2140 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 19:40:11.264565    2140 out.go:177] * Restarting existing hyperv VM for "multinode-237000-m02" ...
	I0524 19:40:11.267112    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-237000-m02
	I0524 19:40:12.915371    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:40:12.915439    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:12.915439    2140 main.go:141] libmachine: Waiting for host to start...
	I0524 19:40:12.915439    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:13.693693    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:13.693990    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:13.693990    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:14.768470    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:40:14.768740    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:15.776253    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:16.553660    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:16.553845    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:16.553924    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:17.674240    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:40:17.674273    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:18.689460    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:19.465199    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:19.465516    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:19.465554    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:20.543857    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:40:20.544006    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:21.558764    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:22.325344    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:22.325538    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:22.325629    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:23.383594    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:40:23.383648    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:24.386412    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:25.149523    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:25.149598    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:25.149643    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:26.230650    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:40:26.230806    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:27.244549    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:28.006354    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:28.006354    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:28.006546    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:29.078714    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:40:29.078886    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:30.086206    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:30.861535    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:30.861535    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:30.861640    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:31.920602    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:40:31.920775    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:32.934972    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:33.698393    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:33.698393    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:33.698508    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:34.788563    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:40:34.788615    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:35.799604    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:36.562517    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:36.562633    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:36.562706    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:37.651806    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:40:37.652043    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:38.662705    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:39.445282    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:39.445541    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:39.445541    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:40.587736    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:40:40.587876    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:40.590726    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:41.367733    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:41.368033    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:41.368033    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:42.478327    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:40:42.478471    2140 main.go:141] libmachine: [stderr =====>] : 
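The "Waiting for host to start..." loop above repeatedly shells out to PowerShell until the VM's network adapter reports an address. A sketch of that retry pattern follows; the PowerShell expression is the one shown in this log, while the Go wrapper (function name, attempt count, sleep interval) is illustrative only.

```go
package sketch

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForVMIP mirrors the retry loop above: ask Hyper-V (via PowerShell) for
// the VM's first IP address and retry until the guest reports one.
func waitForVMIP(vmName string, attempts int) (string, error) {
	expr := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
		if err != nil {
			return "", err
		}
		if ip := strings.TrimSpace(string(out)); ip != "" {
			return ip, nil // e.g. 172.27.142.80 once the guest network is up
		}
		time.Sleep(time.Second) // the log shows roughly one query every few seconds
	}
	return "", fmt.Errorf("no IP reported for %s after %d attempts", vmName, attempts)
}
```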
	I0524 19:40:42.478775    2140 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\config.json ...
	I0524 19:40:42.480468    2140 machine.go:88] provisioning docker machine ...
	I0524 19:40:42.480468    2140 buildroot.go:166] provisioning hostname "multinode-237000-m02"
	I0524 19:40:42.480468    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:43.237969    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:43.237969    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:43.238137    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:44.330212    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:40:44.330212    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:44.335232    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:40:44.336370    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.142.80 22 <nil> <nil>}
	I0524 19:40:44.336435    2140 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-237000-m02 && echo "multinode-237000-m02" | sudo tee /etc/hostname
	I0524 19:40:44.500896    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-237000-m02
	
	I0524 19:40:44.501010    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:45.276147    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:45.276368    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:45.276445    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:46.368977    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:40:46.369150    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:46.376047    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:40:46.377024    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.142.80 22 <nil> <nil>}
	I0524 19:40:46.377024    2140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-237000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-237000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-237000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 19:40:46.543799    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 19:40:46.543799    2140 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0524 19:40:46.543799    2140 buildroot.go:174] setting up certificates
	I0524 19:40:46.543799    2140 provision.go:83] configureAuth start
	I0524 19:40:46.543799    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:47.312490    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:47.312749    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:47.312820    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:48.388228    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:40:48.388472    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:48.388681    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:49.134291    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:49.134342    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:49.134483    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:50.212042    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:40:50.212223    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:50.212332    2140 provision.go:138] copyHostCerts
	I0524 19:40:50.212359    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0524 19:40:50.212949    2140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0524 19:40:50.212949    2140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0524 19:40:50.213310    2140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0524 19:40:50.214550    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0524 19:40:50.214852    2140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0524 19:40:50.214852    2140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0524 19:40:50.215093    2140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0524 19:40:50.216583    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0524 19:40:50.216946    2140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0524 19:40:50.216946    2140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0524 19:40:50.217232    2140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0524 19:40:50.219143    2140 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-237000-m02 san=[172.27.142.80 172.27.142.80 localhost 127.0.0.1 minikube multinode-237000-m02]
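Note: the provision.go line above is minikube generating the Docker server certificate for this node, signed by the shared minikube CA, with the node IP and hostname as SANs. minikube does this internally in Go; the following is only a rough openssl sketch of an equivalent step, with hypothetical local file names:
	# Hypothetical openssl equivalent of the server-cert generation logged above
	# (minikube performs this in Go; paths here are illustrative only).
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.multinode-237000-m02" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -extfile <(printf "subjectAltName=IP:172.27.142.80,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:multinode-237000-m02") \
	  -days 365 -out server.pem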
	I0524 19:40:50.320938    2140 provision.go:172] copyRemoteCerts
	I0524 19:40:50.331973    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 19:40:50.331973    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:51.076520    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:51.076606    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:51.076689    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:52.137321    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:40:52.137321    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:52.137820    2140 sshutil.go:53] new ssh client: &{IP:172.27.142.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\id_rsa Username:docker}
	I0524 19:40:52.251806    2140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.9197538s)
	I0524 19:40:52.251882    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0524 19:40:52.252465    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0524 19:40:52.298230    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0524 19:40:52.298715    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0524 19:40:52.348473    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0524 19:40:52.348473    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0524 19:40:52.404467    2140 provision.go:86] duration metric: configureAuth took 5.8606705s
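Note: at this point configureAuth has refreshed ca.pem/cert.pem/key.pem on the host and pushed server.pem, server-key.pem and ca.pem into /etc/docker on the guest, which is what allows TLS-verified access to the node's dockerd on port 2376 (see the ExecStart line rendered below). A manual sanity check of that endpoint might look like the following sketch; the cert paths are the ones used in this run, and quoting/line continuation would need adapting to whatever shell is used on the host:
	# Manual check of the TLS-secured docker endpoint provisioned above.
	docker --tlsverify \
	  --tlscacert 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem' \
	  --tlscert 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem' \
	  --tlskey 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem' \
	  -H tcp://172.27.142.80:2376 version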
	I0524 19:40:52.404467    2140 buildroot.go:189] setting minikube options for container-runtime
	I0524 19:40:52.405364    2140 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:40:52.405364    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:53.152602    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:53.152794    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:53.152865    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:54.209707    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:40:54.209707    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:54.213984    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:40:54.214768    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.142.80 22 <nil> <nil>}
	I0524 19:40:54.214768    2140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 19:40:54.372995    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 19:40:54.373053    2140 buildroot.go:70] root file system type: tmpfs
	I0524 19:40:54.373304    2140 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 19:40:54.373304    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:55.142904    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:55.142904    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:55.142904    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:56.212604    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:40:56.212604    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:56.217410    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:40:56.218371    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.142.80 22 <nil> <nil>}
	I0524 19:40:56.218488    2140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.143.236"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 19:40:56.382510    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.143.236
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 19:40:56.382621    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:40:57.142921    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:40:57.142921    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:57.143014    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:40:58.228555    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:40:58.228738    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:40:58.234284    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:40:58.235201    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.142.80 22 <nil> <nil>}
	I0524 19:40:58.235201    2140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 19:40:59.656015    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 19:40:59.656114    2140 machine.go:91] provisioned docker machine in 17.1756227s
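Note: the docker.service update above follows minikube's idempotent unit-install pattern: render the unit to docker.service.new, and only if it differs from (or is missing at) the installed path, move it into place and daemon-reload/enable/restart. On this freshly provisioned node the diff fails because no unit exists yet, hence the "can't stat" message and the new symlink. The same pattern in isolation, as a minimal sketch (/tmp/docker.service.new stands in for the unit rendered over SSH above):
	# Only touch systemd if the rendered unit actually changed.
	if ! sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new; then
	  sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload
	  sudo systemctl enable docker
	  sudo systemctl restart docker
	fi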
	I0524 19:40:59.656114    2140 start.go:300] post-start starting for "multinode-237000-m02" (driver="hyperv")
	I0524 19:40:59.656114    2140 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 19:40:59.666932    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 19:40:59.666932    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:41:00.432453    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:00.433517    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:00.433517    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:01.564602    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:41:01.564675    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:01.565265    2140 sshutil.go:53] new ssh client: &{IP:172.27.142.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\id_rsa Username:docker}
	I0524 19:41:01.675807    2140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (2.0088302s)
	I0524 19:41:01.685090    2140 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 19:41:01.691424    2140 command_runner.go:130] > NAME=Buildroot
	I0524 19:41:01.691424    2140 command_runner.go:130] > VERSION=2021.02.12-1-g419828a-dirty
	I0524 19:41:01.691424    2140 command_runner.go:130] > ID=buildroot
	I0524 19:41:01.691424    2140 command_runner.go:130] > VERSION_ID=2021.02.12
	I0524 19:41:01.691488    2140 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0524 19:41:01.691554    2140 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 19:41:01.691580    2140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0524 19:41:01.692065    2140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0524 19:41:01.693119    2140 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> 65602.pem in /etc/ssl/certs
	I0524 19:41:01.693119    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> /etc/ssl/certs/65602.pem
	I0524 19:41:01.701564    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0524 19:41:01.718684    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /etc/ssl/certs/65602.pem (1708 bytes)
	I0524 19:41:01.762259    2140 start.go:303] post-start completed in 2.106146s
	I0524 19:41:01.762259    2140 fix.go:57] fixHost completed within 51.2414914s
	I0524 19:41:01.762259    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:41:02.555071    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:02.555071    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:02.555071    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:03.642017    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:41:03.642017    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:03.646075    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:41:03.646789    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.142.80 22 <nil> <nil>}
	I0524 19:41:03.646789    2140 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0524 19:41:03.789937    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684957263.787243574
	
	I0524 19:41:03.789937    2140 fix.go:207] guest clock: 1684957263.787243574
	I0524 19:41:03.789937    2140 fix.go:220] Guest: 2023-05-24 19:41:03.787243574 +0000 UTC Remote: 2023-05-24 19:41:01.7622595 +0000 UTC m=+150.720442601 (delta=2.024984074s)
	I0524 19:41:03.789937    2140 fix.go:191] guest clock delta is within tolerance: 2.024984074s
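Note: fix.go reads the guest clock over SSH (the date command above) and compares it with the host clock; here the ~2s delta is within tolerance, so no resynchronization is attempted. A minimal stand-alone version of that comparison, where the 5s threshold is an illustrative value rather than minikube's actual tolerance and HOST/KEY are this node's IP and SSH key:
	# Compare guest time (read over SSH) with local time, both as epoch seconds.
	HOST=172.27.142.80
	KEY=~/.minikube/machines/multinode-237000-m02/id_rsa
	guest_now=$(ssh -i "$KEY" docker@"$HOST" date +%s)
	host_now=$(date +%s)
	delta=$(( guest_now > host_now ? guest_now - host_now : host_now - guest_now ))
	echo "clock delta: ${delta}s"
	[ "$delta" -le 5 ] && echo "within tolerance" || echo "resync needed"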
	I0524 19:41:03.789937    2140 start.go:83] releasing machines lock for "multinode-237000-m02", held for 53.2693236s
	I0524 19:41:03.789937    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:41:04.572449    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:04.572449    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:04.572529    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:05.700752    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:41:05.700752    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:05.702817    2140 out.go:177] * Found network options:
	I0524 19:41:05.715259    2140 out.go:177]   - NO_PROXY=172.27.143.236
	W0524 19:41:05.718234    2140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0524 19:41:05.722520    2140 out.go:177]   - no_proxy=172.27.143.236
	W0524 19:41:05.726211    2140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0524 19:41:05.727796    2140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0524 19:41:05.730320    2140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 19:41:05.730451    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:41:05.739146    2140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0524 19:41:05.740154    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:41:06.539925    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:06.539925    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:06.539925    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:06.540060    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:06.540060    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:06.540060    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:07.699817    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:41:07.699893    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:07.699893    2140 sshutil.go:53] new ssh client: &{IP:172.27.142.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\id_rsa Username:docker}
	I0524 19:41:07.721974    2140 main.go:141] libmachine: [stdout =====>] : 172.27.142.80
	
	I0524 19:41:07.721974    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:07.722360    2140 sshutil.go:53] new ssh client: &{IP:172.27.142.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\id_rsa Username:docker}
	I0524 19:41:07.802366    2140 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0524 19:41:07.803384    2140 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (2.0642393s)
	W0524 19:41:07.803384    2140 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 19:41:07.814067    2140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 19:41:07.921482    2140 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0524 19:41:07.921482    2140 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.1911633s)
	I0524 19:41:07.921482    2140 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0524 19:41:07.921482    2140 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
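Note: before wiring up networking, minikube disables any bridge/podman CNI configs shipped in the guest image (here /etc/cni/net.d/87-podman-bridge.conflist) by renaming them with a .mk_disabled suffix, so only the CNI it manages for this multinode profile (kindnet) stays active. The rename step from the find command above, written out as a stand-alone sketch:
	# Disable competing bridge/podman CNI configs by renaming them.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;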
	I0524 19:41:07.921482    2140 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 19:41:07.929532    2140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 19:41:07.972824    2140 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
	I0524 19:41:07.972824    2140 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
	I0524 19:41:07.972824    2140 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
	I0524 19:41:07.972824    2140 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
	I0524 19:41:07.972824    2140 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0524 19:41:07.972824    2140 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0524 19:41:07.972824    2140 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0524 19:41:07.972824    2140 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0524 19:41:07.972824    2140 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 19:41:07.972824    2140 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0524 19:41:07.972824    2140 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0524 19:41:07.973358    2140 docker.go:563] Images already preloaded, skipping extraction
	I0524 19:41:07.973403    2140 start.go:481] detecting cgroup driver to use...
	I0524 19:41:07.973609    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 19:41:08.004297    2140 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0524 19:41:08.015152    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 19:41:08.042905    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 19:41:08.060781    2140 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 19:41:08.074222    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 19:41:08.104050    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 19:41:08.133451    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 19:41:08.164797    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 19:41:08.194516    2140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 19:41:08.227987    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 19:41:08.258945    2140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 19:41:08.277481    2140 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0524 19:41:08.288894    2140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 19:41:08.314844    2140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:41:08.496293    2140 ssh_runner.go:195] Run: sudo systemctl restart containerd
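Note: the run of sed commands above rewrites /etc/containerd/config.toml so that containerd (shipped in the guest image even though Docker is the selected runtime) uses the cgroupfs driver, the runc v2 shim, pause:3.9 as the sandbox image, and /etc/cni/net.d for CNI configs, before it is restarted. Condensed into one sketch:
	# Condensed form of the containerd reconfiguration performed above.
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload && sudo systemctl restart containerd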
	I0524 19:41:08.525918    2140 start.go:481] detecting cgroup driver to use...
	I0524 19:41:08.535867    2140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 19:41:08.555194    2140 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0524 19:41:08.555194    2140 command_runner.go:130] > [Unit]
	I0524 19:41:08.555194    2140 command_runner.go:130] > Description=Docker Application Container Engine
	I0524 19:41:08.555194    2140 command_runner.go:130] > Documentation=https://docs.docker.com
	I0524 19:41:08.555194    2140 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0524 19:41:08.555194    2140 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0524 19:41:08.555194    2140 command_runner.go:130] > StartLimitBurst=3
	I0524 19:41:08.555194    2140 command_runner.go:130] > StartLimitIntervalSec=60
	I0524 19:41:08.555194    2140 command_runner.go:130] > [Service]
	I0524 19:41:08.555194    2140 command_runner.go:130] > Type=notify
	I0524 19:41:08.555194    2140 command_runner.go:130] > Restart=on-failure
	I0524 19:41:08.555194    2140 command_runner.go:130] > Environment=NO_PROXY=172.27.143.236
	I0524 19:41:08.555194    2140 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0524 19:41:08.555194    2140 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0524 19:41:08.555194    2140 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0524 19:41:08.555194    2140 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0524 19:41:08.555194    2140 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0524 19:41:08.555194    2140 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0524 19:41:08.555194    2140 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0524 19:41:08.555194    2140 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0524 19:41:08.555194    2140 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0524 19:41:08.555194    2140 command_runner.go:130] > ExecStart=
	I0524 19:41:08.556254    2140 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0524 19:41:08.556254    2140 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0524 19:41:08.556254    2140 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0524 19:41:08.556254    2140 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0524 19:41:08.556254    2140 command_runner.go:130] > LimitNOFILE=infinity
	I0524 19:41:08.556254    2140 command_runner.go:130] > LimitNPROC=infinity
	I0524 19:41:08.556254    2140 command_runner.go:130] > LimitCORE=infinity
	I0524 19:41:08.556254    2140 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0524 19:41:08.556254    2140 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0524 19:41:08.556254    2140 command_runner.go:130] > TasksMax=infinity
	I0524 19:41:08.556254    2140 command_runner.go:130] > TimeoutStartSec=0
	I0524 19:41:08.556254    2140 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0524 19:41:08.556254    2140 command_runner.go:130] > Delegate=yes
	I0524 19:41:08.556254    2140 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0524 19:41:08.556254    2140 command_runner.go:130] > KillMode=process
	I0524 19:41:08.556254    2140 command_runner.go:130] > [Install]
	I0524 19:41:08.556254    2140 command_runner.go:130] > WantedBy=multi-user.target
	I0524 19:41:08.566475    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 19:41:08.601090    2140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 19:41:08.638367    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 19:41:08.675528    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 19:41:08.712239    2140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 19:41:08.778812    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 19:41:08.807674    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 19:41:08.840511    2140 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0524 19:41:08.851582    2140 ssh_runner.go:195] Run: which cri-dockerd
	I0524 19:41:08.857588    2140 command_runner.go:130] > /usr/bin/cri-dockerd
	I0524 19:41:08.868216    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 19:41:08.885016    2140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 19:41:08.928090    2140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 19:41:09.110182    2140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 19:41:09.294920    2140 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 19:41:09.295016    2140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 19:41:09.340025    2140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:41:09.524051    2140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 19:41:11.215285    2140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6912344s)
	I0524 19:41:11.224724    2140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 19:41:11.405683    2140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 19:41:11.607684    2140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 19:41:11.800226    2140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:41:11.978123    2140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 19:41:12.017335    2140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:41:12.202161    2140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 19:41:12.316543    2140 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 19:41:12.327646    2140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 19:41:12.338438    2140 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0524 19:41:12.339359    2140 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0524 19:41:12.339359    2140 command_runner.go:130] > Device: 16h/22d	Inode: 904         Links: 1
	I0524 19:41:12.339359    2140 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0524 19:41:12.339359    2140 command_runner.go:130] > Access: 2023-05-24 19:41:12.222343125 +0000
	I0524 19:41:12.339359    2140 command_runner.go:130] > Modify: 2023-05-24 19:41:12.222343125 +0000
	I0524 19:41:12.339359    2140 command_runner.go:130] > Change: 2023-05-24 19:41:12.227342943 +0000
	I0524 19:41:12.339436    2140 command_runner.go:130] >  Birth: -
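Note: start.go then waits up to 60s for the cri-dockerd socket before trusting the CRI endpoint; the stat output above shows it was already present. A simple polling loop with the same shape of check (the per-second cadence is an assumption, not necessarily what minikube uses):
	# Poll for the CRI socket until it appears or the 60s budget runs out.
	sock=/var/run/cri-dockerd.sock
	for i in $(seq 1 60); do
	  if stat "$sock" >/dev/null 2>&1; then
	    echo "socket ready (check ${i})"; break
	  fi
	  sleep 1
	done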
	I0524 19:41:12.339492    2140 start.go:549] Will wait 60s for crictl version
	I0524 19:41:12.350653    2140 ssh_runner.go:195] Run: which crictl
	I0524 19:41:12.356192    2140 command_runner.go:130] > /usr/bin/crictl
	I0524 19:41:12.366905    2140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 19:41:12.425816    2140 command_runner.go:130] > Version:  0.1.0
	I0524 19:41:12.425816    2140 command_runner.go:130] > RuntimeName:  docker
	I0524 19:41:12.425816    2140 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0524 19:41:12.425816    2140 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0524 19:41:12.425991    2140 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 19:41:12.434968    2140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 19:41:12.477664    2140 command_runner.go:130] > 20.10.23
	I0524 19:41:12.486391    2140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 19:41:12.529329    2140 command_runner.go:130] > 20.10.23
	I0524 19:41:12.534043    2140 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 19:41:12.538682    2140 out.go:177]   - env NO_PROXY=172.27.143.236
	I0524 19:41:12.541070    2140 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0524 19:41:12.546025    2140 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0524 19:41:12.546025    2140 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0524 19:41:12.546025    2140 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0524 19:41:12.546025    2140 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:74:1b:be Flags:up|broadcast|multicast|running}
	I0524 19:41:12.549014    2140 ip.go:210] interface addr: fe80::2d9b:6c8:36de:16db/64
	I0524 19:41:12.549014    2140 ip.go:210] interface addr: 172.27.128.1/20
	I0524 19:41:12.559019    2140 ssh_runner.go:195] Run: grep 172.27.128.1	host.minikube.internal$ /etc/hosts
	I0524 19:41:12.566032    2140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
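Note: host.minikube.internal is pinned in the guest's /etc/hosts by the bash -c command above: strip any existing entry, append the host-side address of the Hyper-V Default Switch discovered just before (172.27.128.1), and copy the temp file back into place. The same idempotent pattern on its own:
	# Idempotent /etc/hosts entry, matching the command shown above.
	IP=172.27.128.1
	{ grep -v $'\thost.minikube.internal$' /etc/hosts; printf '%s\thost.minikube.internal\n' "$IP"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts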
	I0524 19:41:12.586173    2140 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000 for IP: 172.27.142.80
	I0524 19:41:12.586299    2140 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:41:12.587040    2140 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0524 19:41:12.587511    2140 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0524 19:41:12.587678    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0524 19:41:12.587961    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0524 19:41:12.588114    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0524 19:41:12.588344    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0524 19:41:12.588344    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem (1338 bytes)
	W0524 19:41:12.589147    2140 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560_empty.pem, impossibly tiny 0 bytes
	I0524 19:41:12.589251    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0524 19:41:12.589350    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0524 19:41:12.589350    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0524 19:41:12.590023    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0524 19:41:12.590528    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem (1708 bytes)
	I0524 19:41:12.590801    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> /usr/share/ca-certificates/65602.pem
	I0524 19:41:12.590938    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:41:12.591136    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem -> /usr/share/ca-certificates/6560.pem
	I0524 19:41:12.592054    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 19:41:12.637529    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 19:41:12.679684    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 19:41:12.721034    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 19:41:12.769373    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /usr/share/ca-certificates/65602.pem (1708 bytes)
	I0524 19:41:12.808936    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 19:41:12.850589    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem --> /usr/share/ca-certificates/6560.pem (1338 bytes)
	I0524 19:41:12.912496    2140 ssh_runner.go:195] Run: openssl version
	I0524 19:41:12.919479    2140 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0524 19:41:12.928757    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65602.pem && ln -fs /usr/share/ca-certificates/65602.pem /etc/ssl/certs/65602.pem"
	I0524 19:41:12.962021    2140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65602.pem
	I0524 19:41:12.968595    2140 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 19:41:12.968595    2140 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 19:41:12.978682    2140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65602.pem
	I0524 19:41:12.987542    2140 command_runner.go:130] > 3ec20f2e
	I0524 19:41:12.999183    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65602.pem /etc/ssl/certs/3ec20f2e.0"
	I0524 19:41:13.031152    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 19:41:13.062698    2140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:41:13.071007    2140 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:41:13.071007    2140 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:41:13.085334    2140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:41:13.094212    2140 command_runner.go:130] > b5213941
	I0524 19:41:13.105202    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 19:41:13.135186    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6560.pem && ln -fs /usr/share/ca-certificates/6560.pem /etc/ssl/certs/6560.pem"
	I0524 19:41:13.168282    2140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6560.pem
	I0524 19:41:13.174761    2140 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 19:41:13.175037    2140 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 19:41:13.185029    2140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6560.pem
	I0524 19:41:13.192552    2140 command_runner.go:130] > 51391683
	I0524 19:41:13.202743    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6560.pem /etc/ssl/certs/51391683.0"
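Note: each CA-like certificate copied into /usr/share/ca-certificates is then linked under /etc/ssl/certs by its OpenSSL subject hash (3ec20f2e, b5213941 and 51391683 above), which is how OpenSSL-based clients on the node locate trusted CAs. The two steps for a single file, as a sketch using minikubeCA.pem as the example:
	# Link a CA certificate under its OpenSSL subject hash.
	cert=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"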
	I0524 19:41:13.232959    2140 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 19:41:13.241670    2140 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 19:41:13.241670    2140 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 19:41:13.249188    2140 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 19:41:13.294206    2140 command_runner.go:130] > cgroupfs
	I0524 19:41:13.295336    2140 cni.go:84] Creating CNI manager for ""
	I0524 19:41:13.295387    2140 cni.go:136] 3 nodes found, recommending kindnet
	I0524 19:41:13.295387    2140 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 19:41:13.295387    2140 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.142.80 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-237000 NodeName:multinode-237000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.143.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.142.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 19:41:13.295387    2140 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.142.80
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-237000-m02"
	  kubeletExtraArgs:
	    node-ip: 172.27.142.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.143.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 19:41:13.295387    2140 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-237000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.142.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 19:41:13.306508    2140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 19:41:13.325187    2140 command_runner.go:130] > kubeadm
	I0524 19:41:13.325187    2140 command_runner.go:130] > kubectl
	I0524 19:41:13.325187    2140 command_runner.go:130] > kubelet
	I0524 19:41:13.325187    2140 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 19:41:13.335059    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0524 19:41:13.353326    2140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0524 19:41:13.388178    2140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 19:41:13.427668    2140 ssh_runner.go:195] Run: grep 172.27.143.236	control-plane.minikube.internal$ /etc/hosts
	I0524 19:41:13.433372    2140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.143.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 19:41:13.454335    2140 host.go:66] Checking if "multinode-237000" exists ...
	I0524 19:41:13.455185    2140 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:41:13.455090    2140 start.go:301] JoinCluster: &{Name:multinode-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.143.236 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.142.80 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.134.200 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 19:41:13.455185    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0524 19:41:13.455185    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:41:14.225071    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:14.225071    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:14.225071    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:15.331483    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:41:15.331666    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:15.332105    2140 sshutil.go:53] new ssh client: &{IP:172.27.143.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:41:15.535842    2140 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token xlf9lr.8rge92tqm3xr8ksd --discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 
	I0524 19:41:15.535931    2140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm token create --print-join-command --ttl=0": (2.0806587s)
	I0524 19:41:15.535931    2140 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.27.142.80 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0524 19:41:15.536045    2140 host.go:66] Checking if "multinode-237000" exists ...
	I0524 19:41:15.546807    2140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl drain multinode-237000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0524 19:41:15.546807    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:41:16.325861    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:16.325861    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:16.325932    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:17.408636    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:41:17.408699    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:17.409085    2140 sshutil.go:53] new ssh client: &{IP:172.27.143.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:41:17.590660    2140 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0524 19:41:17.679377    2140 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-9g7mc, kube-system/kube-proxy-zglzj
	I0524 19:41:20.711950    2140 command_runner.go:130] > node/multinode-237000-m02 cordoned
	I0524 19:41:20.712035    2140 command_runner.go:130] > pod "busybox-67b7f59bb-tdzj2" has DeletionTimestamp older than 1 seconds, skipping
	I0524 19:41:20.712035    2140 command_runner.go:130] > node/multinode-237000-m02 drained
	I0524 19:41:20.712035    2140 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl drain multinode-237000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (5.1652303s)
	I0524 19:41:20.712145    2140 node.go:108] successfully drained node "m02"
	I0524 19:41:20.712845    2140 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:41:20.713582    2140 kapi.go:59] client config for multinode-237000: &rest.Config{Host:"https://172.27.143.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:41:20.714585    2140 request.go:1188] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0524 19:41:20.714585    2140 round_trippers.go:463] DELETE https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:20.714585    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:20.714585    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:20.714585    2140 round_trippers.go:473]     Content-Type: application/json
	I0524 19:41:20.714585    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:20.729295    2140 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0524 19:41:20.729295    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:20.729295    2140 round_trippers.go:580]     Audit-Id: cc8dc37f-7cae-4043-bc42-5a3a7c006c13
	I0524 19:41:20.729295    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:20.729295    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:20.729295    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:20.729295    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:20.729295    2140 round_trippers.go:580]     Content-Length: 171
	I0524 19:41:20.729823    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:20 GMT
	I0524 19:41:20.729887    2140 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-237000-m02","kind":"nodes","uid":"2000bb9b-161c-4dbf-bbb8-0177500de507"}}
	I0524 19:41:20.729963    2140 node.go:124] successfully deleted node "m02"
	I0524 19:41:20.729963    2140 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.27.142.80 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0524 19:41:20.730091    2140 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.27.142.80 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0524 19:41:20.730091    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xlf9lr.8rge92tqm3xr8ksd --discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-237000-m02"
	I0524 19:41:21.122201    2140 command_runner.go:130] ! W0524 19:41:21.119200    1323 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0524 19:41:21.924760    2140 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 19:41:23.774132    2140 command_runner.go:130] > [preflight] Running pre-flight checks
	I0524 19:41:23.774132    2140 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0524 19:41:23.774269    2140 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0524 19:41:23.774269    2140 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 19:41:23.774269    2140 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 19:41:23.774269    2140 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0524 19:41:23.774269    2140 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0524 19:41:23.774269    2140 command_runner.go:130] > This node has joined the cluster:
	I0524 19:41:23.774351    2140 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0524 19:41:23.774351    2140 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0524 19:41:23.774351    2140 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0524 19:41:23.774388    2140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xlf9lr.8rge92tqm3xr8ksd --discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-237000-m02": (3.0442978s)
	I0524 19:41:23.774430    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0524 19:41:24.025183    2140 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0524 19:41:24.271182    2140 start.go:303] JoinCluster complete in 10.8160963s
	I0524 19:41:24.271262    2140 cni.go:84] Creating CNI manager for ""
	I0524 19:41:24.271328    2140 cni.go:136] 3 nodes found, recommending kindnet
	I0524 19:41:24.282614    2140 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0524 19:41:24.292575    2140 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0524 19:41:24.292575    2140 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0524 19:41:24.292575    2140 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0524 19:41:24.292575    2140 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0524 19:41:24.292575    2140 command_runner.go:130] > Access: 2023-05-24 19:39:02.222529100 +0000
	I0524 19:41:24.292575    2140 command_runner.go:130] > Modify: 2023-05-20 04:10:39.000000000 +0000
	I0524 19:41:24.292575    2140 command_runner.go:130] > Change: 2023-05-24 19:38:51.773000000 +0000
	I0524 19:41:24.292575    2140 command_runner.go:130] >  Birth: -
	I0524 19:41:24.292575    2140 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0524 19:41:24.292575    2140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0524 19:41:24.337569    2140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0524 19:41:24.853348    2140 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0524 19:41:24.853419    2140 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0524 19:41:24.853419    2140 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0524 19:41:24.853419    2140 command_runner.go:130] > daemonset.apps/kindnet configured
	I0524 19:41:24.854183    2140 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:41:24.855185    2140 kapi.go:59] client config for multinode-237000: &rest.Config{Host:"https://172.27.143.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:41:24.856287    2140 round_trippers.go:463] GET https://172.27.143.236:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0524 19:41:24.856287    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:24.856287    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:24.856353    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:24.865797    2140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0524 19:41:24.865797    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:24.865797    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:24.865797    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:24.865797    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:24.865797    2140 round_trippers.go:580]     Content-Length: 292
	I0524 19:41:24.865797    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:24 GMT
	I0524 19:41:24.865797    2140 round_trippers.go:580]     Audit-Id: 109b5ce6-c11f-4f71-ae01-5dad764e9208
	I0524 19:41:24.865797    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:24.865797    2140 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9016559d-3c59-4f76-8961-1b5665cb8836","resourceVersion":"1294","creationTimestamp":"2023-05-24T19:27:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0524 19:41:24.866473    2140 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-237000" context rescaled to 1 replicas
	I0524 19:41:24.866513    2140 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.27.142.80 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0524 19:41:24.869349    2140 out.go:177] * Verifying Kubernetes components...
	I0524 19:41:24.882062    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 19:41:24.915065    2140 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:41:24.916050    2140 kapi.go:59] client config for multinode-237000: &rest.Config{Host:"https://172.27.143.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:41:24.916050    2140 node_ready.go:35] waiting up to 6m0s for node "multinode-237000-m02" to be "Ready" ...
	I0524 19:41:24.917089    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:24.917089    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:24.917089    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:24.917089    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:24.922055    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:24.922595    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:24.922595    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:24.922595    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:24.922595    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:24.922595    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:24.922595    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:24 GMT
	I0524 19:41:24.922595    2140 round_trippers.go:580]     Audit-Id: 1915442a-82d5-4135-9376-4e12a63b0efa
	I0524 19:41:24.923061    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1406","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4157 chars]
	I0524 19:41:25.430900    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:25.430968    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:25.430968    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:25.430968    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:25.434471    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:41:25.434471    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:25.435518    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:25.435518    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:25.435518    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:25 GMT
	I0524 19:41:25.435518    2140 round_trippers.go:580]     Audit-Id: 79148a3a-f7e9-4243-8460-25dbd2150b08
	I0524 19:41:25.435518    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:25.435518    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:25.435518    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1406","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4157 chars]
	I0524 19:41:25.936352    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:25.936419    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:25.936419    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:25.936419    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:25.940825    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:25.940825    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:25.940825    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:25.940825    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:25.940825    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:25.940825    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:25 GMT
	I0524 19:41:25.940825    2140 round_trippers.go:580]     Audit-Id: 9e627c71-f904-41f8-b59a-b5824c875288
	I0524 19:41:25.940825    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:25.940825    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1406","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4157 chars]
	I0524 19:41:26.435678    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:26.435678    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:26.435752    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:26.435752    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:26.444535    2140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:41:26.444535    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:26.444535    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:26.444535    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:26.444535    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:26.444535    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:26.444535    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:26 GMT
	I0524 19:41:26.444535    2140 round_trippers.go:580]     Audit-Id: d0736bc2-9ccb-47cb-930a-3e431177ede3
	I0524 19:41:26.444535    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1406","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4157 chars]
	I0524 19:41:26.936200    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:26.936380    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:26.936380    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:26.936380    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:26.940638    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:26.940638    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:26.940638    2140 round_trippers.go:580]     Audit-Id: ec3630d6-09cc-4da0-a405-fc0966ad5aaf
	I0524 19:41:26.940949    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:26.940949    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:26.940949    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:26.940949    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:26.940949    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:26 GMT
	I0524 19:41:26.942238    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1415","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4266 chars]
	I0524 19:41:26.942784    2140 node_ready.go:58] node "multinode-237000-m02" has status "Ready":"False"
	I0524 19:41:27.425408    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:27.425623    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:27.425623    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:27.425623    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:27.433166    2140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0524 19:41:27.433166    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:27.433166    2140 round_trippers.go:580]     Audit-Id: 3f37ba6b-8b6a-42c4-9bba-58cf4b90f786
	I0524 19:41:27.433166    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:27.433166    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:27.433166    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:27.433166    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:27.433166    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:27 GMT
	I0524 19:41:27.433615    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1415","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4266 chars]
	I0524 19:41:27.927015    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:27.927079    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:27.927079    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:27.927140    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:27.931553    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:27.931553    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:27.931553    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:27.931553    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:27 GMT
	I0524 19:41:27.931553    2140 round_trippers.go:580]     Audit-Id: 342546ea-77f1-4c2f-81ea-96ba7be11233
	I0524 19:41:27.931553    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:27.931553    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:27.931553    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:27.932246    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1415","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4266 chars]
	I0524 19:41:28.429909    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:28.429909    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:28.429909    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:28.429909    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:28.434155    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:28.434155    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:28.434155    2140 round_trippers.go:580]     Audit-Id: d663c750-9984-4c9e-a3a7-b7ad8210a0e9
	I0524 19:41:28.434155    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:28.434155    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:28.434155    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:28.434155    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:28.434155    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:28 GMT
	I0524 19:41:28.434983    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1415","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4266 chars]
	I0524 19:41:28.937441    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:28.937441    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:28.937441    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:28.937720    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:28.941862    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:28.941862    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:28.941862    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:28.941862    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:28.941862    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:28 GMT
	I0524 19:41:28.941862    2140 round_trippers.go:580]     Audit-Id: c4a36463-255b-4eee-aea5-dbaaa59b797b
	I0524 19:41:28.941862    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:28.941862    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:28.942812    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1415","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4266 chars]
	I0524 19:41:28.942812    2140 node_ready.go:58] node "multinode-237000-m02" has status "Ready":"False"
	I0524 19:41:29.426089    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:29.426089    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:29.426089    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:29.426089    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:29.430753    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:29.430877    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:29.430877    2140 round_trippers.go:580]     Audit-Id: 5adbd76d-cc02-4a02-bc6f-3c95b6fcf8d2
	I0524 19:41:29.430877    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:29.430877    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:29.430945    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:29.430945    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:29.430977    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:29 GMT
	I0524 19:41:29.431135    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1415","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4266 chars]
	I0524 19:41:29.932432    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:29.932432    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:29.932432    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:29.932432    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:29.941849    2140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0524 19:41:29.941849    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:29.941849    2140 round_trippers.go:580]     Audit-Id: cd40e646-8461-4aee-bbde-122b669f7bf7
	I0524 19:41:29.941849    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:29.942079    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:29.942079    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:29.942111    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:29.942111    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:29 GMT
	I0524 19:41:29.942315    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1415","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4266 chars]
	I0524 19:41:30.424129    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:30.424384    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:30.424449    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:30.424449    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:30.428949    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:30.429227    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:30.429227    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:30 GMT
	I0524 19:41:30.429227    2140 round_trippers.go:580]     Audit-Id: 630f31eb-c1cc-4f26-a74a-30258cf0820e
	I0524 19:41:30.429296    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:30.429296    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:30.429296    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:30.429296    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:30.429533    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1415","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4266 chars]
	I0524 19:41:30.930086    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:30.930086    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:30.930086    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:30.930086    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:30.933934    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:41:30.933934    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:30.933934    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:30 GMT
	I0524 19:41:30.933934    2140 round_trippers.go:580]     Audit-Id: 108196be-b153-4a7d-b0e0-c1450bc7e8f7
	I0524 19:41:30.933934    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:30.934585    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:30.934585    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:30.934585    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:30.934896    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1415","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4266 chars]
	I0524 19:41:31.432472    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:31.432534    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:31.432534    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:31.432534    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:31.436666    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:31.436768    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:31.436768    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:31.436830    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:31 GMT
	I0524 19:41:31.436830    2140 round_trippers.go:580]     Audit-Id: 356993b5-3ebb-4368-bff8-13eaf1915b2e
	I0524 19:41:31.436830    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:31.436830    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:31.436830    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:31.436830    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1415","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4266 chars]
	I0524 19:41:31.437496    2140 node_ready.go:58] node "multinode-237000-m02" has status "Ready":"False"
	I0524 19:41:31.934976    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:31.934976    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:31.934976    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:31.934976    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:31.939128    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:41:31.939128    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:31.939128    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:31.939128    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:31 GMT
	I0524 19:41:31.939207    2140 round_trippers.go:580]     Audit-Id: b7d49659-1bb7-4886-aa32-0c9b11078002
	I0524 19:41:31.939207    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:31.939207    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:31.939207    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:31.939543    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1415","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4266 chars]
	I0524 19:41:32.438874    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:32.438874    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:32.438874    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:32.438874    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:32.455604    2140 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0524 19:41:32.456121    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:32.456121    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:32.456121    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:32 GMT
	I0524 19:41:32.456204    2140 round_trippers.go:580]     Audit-Id: aa97627d-3f7f-451d-bfeb-512e7cba0490
	I0524 19:41:32.456204    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:32.456204    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:32.456204    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:32.456457    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1432","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4526 chars]
	I0524 19:41:32.456922    2140 node_ready.go:49] node "multinode-237000-m02" has status "Ready":"True"
	I0524 19:41:32.456996    2140 node_ready.go:38] duration metric: took 7.5409488s waiting for node "multinode-237000-m02" to be "Ready" ...
	I0524 19:41:32.456996    2140 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 19:41:32.457070    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods
	I0524 19:41:32.457145    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:32.457145    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:32.457145    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:32.469559    2140 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0524 19:41:32.469559    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:32.469559    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:32.469559    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:32 GMT
	I0524 19:41:32.469559    2140 round_trippers.go:580]     Audit-Id: 75c12680-bd13-401a-9996-bfd53497a906
	I0524 19:41:32.469559    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:32.469559    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:32.469559    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:32.471521    2140 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1433"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1290","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83422 chars]
	I0524 19:41:32.475526    2140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:32.475526    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:41:32.475526    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:32.475526    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:32.475526    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:32.487518    2140 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0524 19:41:32.487518    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:32.487518    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:32.487518    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:32.487518    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:32.487518    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:32 GMT
	I0524 19:41:32.487518    2140 round_trippers.go:580]     Audit-Id: cb44e443-532c-42fe-9e61-4de57d5f1158
	I0524 19:41:32.487518    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:32.487518    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1290","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0524 19:41:32.487518    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:41:32.487518    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:32.487518    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:32.488561    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:32.491510    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:41:32.491510    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:32.491510    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:32.491510    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:32.491510    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:32 GMT
	I0524 19:41:32.491510    2140 round_trippers.go:580]     Audit-Id: f47c6bbf-273f-4bd6-a44c-bc6d20306e7c
	I0524 19:41:32.491510    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:32.491510    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:32.491510    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:41:32.492521    2140 pod_ready.go:92] pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace has status "Ready":"True"
	I0524 19:41:32.492521    2140 pod_ready.go:81] duration metric: took 16.995ms waiting for pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:32.492521    2140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:32.492521    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-237000
	I0524 19:41:32.492521    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:32.492521    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:32.492521    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:32.497529    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:41:32.497529    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:32.497529    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:32 GMT
	I0524 19:41:32.497529    2140 round_trippers.go:580]     Audit-Id: 1c266add-dfcb-468e-b8ca-946c4c151cff
	I0524 19:41:32.497529    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:32.497529    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:32.497529    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:32.497529    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:32.498520    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-237000","namespace":"kube-system","uid":"4b73c6ae-c8c9-444c-a5b5-a4bb2e724689","resourceVersion":"1274","creationTimestamp":"2023-05-24T19:39:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.143.236:2379","kubernetes.io/config.hash":"a462e4d9e600aa9f863cde3f240bd69a","kubernetes.io/config.mirror":"a462e4d9e600aa9f863cde3f240bd69a","kubernetes.io/config.seen":"2023-05-24T19:39:40.956259078Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:39:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0524 19:41:32.498520    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:41:32.498520    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:32.498520    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:32.498520    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:32.503518    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:32.503518    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:32.504508    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:32.504508    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:32.504508    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:32.504508    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:32 GMT
	I0524 19:41:32.504508    2140 round_trippers.go:580]     Audit-Id: 28f42646-2ccd-4434-9a41-e7d82e38a77a
	I0524 19:41:32.504508    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:32.504508    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:41:32.504508    2140 pod_ready.go:92] pod "etcd-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:41:32.505543    2140 pod_ready.go:81] duration metric: took 13.0223ms waiting for pod "etcd-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:32.505543    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:32.505543    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-237000
	I0524 19:41:32.505543    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:32.505543    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:32.505543    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:32.508520    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:41:32.508520    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:32.508520    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:32 GMT
	I0524 19:41:32.508520    2140 round_trippers.go:580]     Audit-Id: 8a013ac2-6ecd-4547-99df-4da858b3c698
	I0524 19:41:32.508520    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:32.508520    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:32.508520    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:32.508520    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:32.508520    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-237000","namespace":"kube-system","uid":"46721249-af81-40ba-b756-6f9def350d07","resourceVersion":"1248","creationTimestamp":"2023-05-24T19:39:50Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.143.236:8443","kubernetes.io/config.hash":"4278bfa912c61c7340a8d49488981a6d","kubernetes.io/config.mirror":"4278bfa912c61c7340a8d49488981a6d","kubernetes.io/config.seen":"2023-05-24T19:39:40.956261577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:39:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0524 19:41:32.509530    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:41:32.509530    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:32.509530    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:32.509530    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:32.512516    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:41:32.512516    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:32.512516    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:32.512516    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:32.512516    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:32 GMT
	I0524 19:41:32.512516    2140 round_trippers.go:580]     Audit-Id: bb79d5dd-0d8a-4fec-949f-9159ae69e722
	I0524 19:41:32.512516    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:32.512516    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:32.512516    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:41:32.512516    2140 pod_ready.go:92] pod "kube-apiserver-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:41:32.512516    2140 pod_ready.go:81] duration metric: took 6.9724ms waiting for pod "kube-apiserver-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:32.512516    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:32.512516    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-237000
	I0524 19:41:32.512516    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:32.512516    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:32.512516    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:32.521523    2140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0524 19:41:32.521834    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:32.521834    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:32.521834    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:32.521834    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:32.521834    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:32 GMT
	I0524 19:41:32.521914    2140 round_trippers.go:580]     Audit-Id: 6ff3daa5-7bc1-4014-ac4e-1568a229342d
	I0524 19:41:32.521914    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:32.522412    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-237000","namespace":"kube-system","uid":"1ff7b570-afe4-4076-989f-d0377d04f9d5","resourceVersion":"1273","creationTimestamp":"2023-05-24T19:27:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"64b5c92760605da2056b367669d6fc80","kubernetes.io/config.mirror":"64b5c92760605da2056b367669d6fc80","kubernetes.io/config.seen":"2023-05-24T19:27:00.264375644Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0524 19:41:32.522712    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:41:32.522712    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:32.522712    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:32.522712    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:32.525348    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:41:32.525666    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:32.525666    2140 round_trippers.go:580]     Audit-Id: a6f1a3a2-3aa6-405f-b108-68902c543899
	I0524 19:41:32.525666    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:32.525666    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:32.525666    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:32.525666    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:32.525666    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:32 GMT
	I0524 19:41:32.526774    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:41:32.527364    2140 pod_ready.go:92] pod "kube-controller-manager-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:41:32.527364    2140 pod_ready.go:81] duration metric: took 14.8486ms waiting for pod "kube-controller-manager-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:32.527364    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4qmlh" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:32.641270    2140 request.go:628] Waited for 113.7438ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qmlh
	I0524 19:41:32.641472    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qmlh
	I0524 19:41:32.641613    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:32.641613    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:32.641613    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:32.645947    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:32.646503    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:32.646503    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:32.646551    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:32.646551    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:32 GMT
	I0524 19:41:32.646583    2140 round_trippers.go:580]     Audit-Id: 2e53a6cd-213e-4578-9055-6f352eb82a6a
	I0524 19:41:32.646583    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:32.646583    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:32.646583    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4qmlh","generateName":"kube-proxy-","namespace":"kube-system","uid":"3c277e06-12a4-451c-ad5b-15cc2bd169ad","resourceVersion":"1324","creationTimestamp":"2023-05-24T19:32:20Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:32:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5974 chars]
	I0524 19:41:32.844664    2140 request.go:628] Waited for 197.2516ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:41:32.845047    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:41:32.845114    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:32.845114    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:32.845250    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:32.849694    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:32.850220    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:32.850220    2140 round_trippers.go:580]     Audit-Id: 2e287220-49e6-4891-8f62-3ef6102bab10
	I0524 19:41:32.850220    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:32.850220    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:32.850220    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:32.850220    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:32.850352    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:32 GMT
	I0524 19:41:32.850544    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"05dd373e-a994-4789-af16-d10bfd472a98","resourceVersion":"1345","creationTimestamp":"2023-05-24T19:37:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:37:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4933 chars]
	I0524 19:41:32.851045    2140 pod_ready.go:97] node "multinode-237000-m03" hosting pod "kube-proxy-4qmlh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-237000-m03" has status "Ready":"Unknown"
	I0524 19:41:32.851110    2140 pod_ready.go:81] duration metric: took 323.7453ms waiting for pod "kube-proxy-4qmlh" in "kube-system" namespace to be "Ready" ...
	E0524 19:41:32.851110    2140 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-237000-m03" hosting pod "kube-proxy-4qmlh" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-237000-m03" has status "Ready":"Unknown"
	I0524 19:41:32.851110    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r6f94" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:33.051692    2140 request.go:628] Waited for 200.5173ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6f94
	I0524 19:41:33.051692    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6f94
	I0524 19:41:33.051692    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:33.051692    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:33.051692    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:33.055811    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:33.055811    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:33.055811    2140 round_trippers.go:580]     Audit-Id: 24903734-80ad-41ed-9e89-f56859e6d782
	I0524 19:41:33.055811    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:33.055811    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:33.055811    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:33.055811    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:33.055811    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:33 GMT
	I0524 19:41:33.055811    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r6f94","generateName":"kube-proxy-","namespace":"kube-system","uid":"90a232cf-33b3-4e3b-82bf-9050d39109d1","resourceVersion":"1243","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I0524 19:41:33.240872    2140 request.go:628] Waited for 183.8883ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:41:33.241013    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:41:33.241013    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:33.241013    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:33.241013    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:33.246334    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:41:33.246334    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:33.246448    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:33 GMT
	I0524 19:41:33.246448    2140 round_trippers.go:580]     Audit-Id: 7d9e83b7-ab96-430f-9408-fb4f182ee20f
	I0524 19:41:33.246448    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:33.246448    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:33.246448    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:33.246448    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:33.246533    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:41:33.247115    2140 pod_ready.go:92] pod "kube-proxy-r6f94" in "kube-system" namespace has status "Ready":"True"
	I0524 19:41:33.247231    2140 pod_ready.go:81] duration metric: took 396.0056ms waiting for pod "kube-proxy-r6f94" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:33.247231    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zglzj" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:33.441058    2140 request.go:628] Waited for 193.4516ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zglzj
	I0524 19:41:33.441058    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zglzj
	I0524 19:41:33.441058    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:33.441058    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:33.441371    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:33.445596    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:33.445596    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:33.445596    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:33.445596    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:33 GMT
	I0524 19:41:33.445596    2140 round_trippers.go:580]     Audit-Id: c49498ec-de39-4d06-8eec-d709cf37d4ab
	I0524 19:41:33.445596    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:33.445596    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:33.445596    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:33.446391    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zglzj","generateName":"kube-proxy-","namespace":"kube-system","uid":"af1fb911-5877-4bcc-92f4-5571f489122c","resourceVersion":"1419","creationTimestamp":"2023-05-24T19:29:22Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5745 chars]
	I0524 19:41:33.643661    2140 request.go:628] Waited for 196.8671ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:33.643754    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:41:33.643754    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:33.643754    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:33.643835    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:33.651059    2140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0524 19:41:33.651059    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:33.651683    2140 round_trippers.go:580]     Audit-Id: ceed74c5-307c-4b31-bae0-b27869168db6
	I0524 19:41:33.651683    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:33.651683    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:33.651683    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:33.651683    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:33.651737    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:33 GMT
	I0524 19:41:33.651973    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1433","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4465 chars]
	I0524 19:41:33.652076    2140 pod_ready.go:92] pod "kube-proxy-zglzj" in "kube-system" namespace has status "Ready":"True"
	I0524 19:41:33.652076    2140 pod_ready.go:81] duration metric: took 404.8457ms waiting for pod "kube-proxy-zglzj" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:33.652076    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:33.847108    2140 request.go:628] Waited for 194.8305ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-237000
	I0524 19:41:33.847313    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-237000
	I0524 19:41:33.847313    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:33.847313    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:33.847313    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:33.850842    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:41:33.850842    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:33.850842    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:33.850842    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:33.850842    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:33 GMT
	I0524 19:41:33.850842    2140 round_trippers.go:580]     Audit-Id: 446caca1-d019-4042-b406-9300b800b301
	I0524 19:41:33.850842    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:33.850842    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:33.852163    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-237000","namespace":"kube-system","uid":"a55c419f-1b04-4895-9fd5-02dd67cd888f","resourceVersion":"1252","creationTimestamp":"2023-05-24T19:27:12Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b26a06953be724b5f34183ed712fbb3d","kubernetes.io/config.mirror":"b26a06953be724b5f34183ed712fbb3d","kubernetes.io/config.seen":"2023-05-24T19:27:12.143961333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0524 19:41:34.047489    2140 request.go:628] Waited for 194.5744ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:41:34.047895    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:41:34.047895    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:34.047895    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:34.047961    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:34.055301    2140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0524 19:41:34.055397    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:34.055397    2140 round_trippers.go:580]     Audit-Id: 0bbe8b0a-8011-4b33-b092-836bafcc95d4
	I0524 19:41:34.055441    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:34.055441    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:34.055441    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:34.055441    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:34.055441    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:34 GMT
	I0524 19:41:34.055572    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:41:34.056100    2140 pod_ready.go:92] pod "kube-scheduler-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:41:34.056266    2140 pod_ready.go:81] duration metric: took 404.1897ms waiting for pod "kube-scheduler-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:41:34.056299    2140 pod_ready.go:38] duration metric: took 1.5993033s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
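	The lines above record minikube polling the apiserver for each system-critical pod's Ready condition, with the "Waited for ... due to client-side throttling" entries coming from client-go's default rate limiter. Below is a minimal, illustrative client-go sketch of that polling pattern; it is not minikube's own code, and the function name waitPodReady, the kubeconfig path, the poll interval, and the QPS/Burst values are assumptions made for this example.

	// Illustrative sketch only: poll a pod's Ready condition the way the log above does.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil // pod reports Ready, as in the pod_ready lines above
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // poll interval assumed; minikube's cadence may differ
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		// Placeholder kubeconfig path for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "C:\\Users\\jenkins\\.kube\\config")
		if err != nil {
			panic(err)
		}
		// Raising QPS/Burst would reduce the client-side throttling waits seen above.
		cfg.QPS = 50
		cfg.Burst = 100
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-multinode-237000", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}
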
	I0524 19:41:34.056348    2140 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 19:41:34.066791    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 19:41:34.093376    2140 system_svc.go:56] duration metric: took 37.0286ms WaitForService to wait for kubelet.
	I0524 19:41:34.093376    2140 kubeadm.go:581] duration metric: took 9.2267272s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 19:41:34.093376    2140 node_conditions.go:102] verifying NodePressure condition ...
	I0524 19:41:34.250383    2140 request.go:628] Waited for 156.7695ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes
	I0524 19:41:34.251447    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes
	I0524 19:41:34.251881    2140 round_trippers.go:469] Request Headers:
	I0524 19:41:34.251881    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:41:34.251881    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:41:34.256706    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:41:34.256706    2140 round_trippers.go:577] Response Headers:
	I0524 19:41:34.256706    2140 round_trippers.go:580]     Audit-Id: a1b04521-c6a6-46fa-a1b0-e1445728dd29
	I0524 19:41:34.257076    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:41:34.257076    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:41:34.257076    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:41:34.257076    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:41:34.257076    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:41:34 GMT
	I0524 19:41:34.257711    2140 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1437"},"items":[{"metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16676 chars]
	I0524 19:41:34.258573    2140 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:41:34.258573    2140 node_conditions.go:123] node cpu capacity is 2
	I0524 19:41:34.258573    2140 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:41:34.258573    2140 node_conditions.go:123] node cpu capacity is 2
	I0524 19:41:34.258573    2140 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:41:34.258573    2140 node_conditions.go:123] node cpu capacity is 2
	I0524 19:41:34.258677    2140 node_conditions.go:105] duration metric: took 165.3007ms to run NodePressure ...
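	The "node storage ephemeral capacity" and "node cpu capacity" lines above come from reading each node's capacity fields during the NodePressure check. A small sketch of that read follows, assuming an already-constructed clientset; the package and function names are invented for this example and are not minikube's helpers.

	// Sketch only: print the node capacity fields the NodePressure lines above report.
	package nodecheck

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]              // e.g. "2"
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage] // e.g. "17784752Ki"
			fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n", n.Name, cpu.String(), eph.String())
		}
		return nil
	}
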
	I0524 19:41:34.258677    2140 start.go:228] waiting for startup goroutines ...
	I0524 19:41:34.258677    2140 start.go:242] writing updated cluster config ...
	I0524 19:41:34.271383    2140 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:41:34.271876    2140 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\config.json ...
	I0524 19:41:34.278309    2140 out.go:177] * Starting worker node multinode-237000-m03 in cluster multinode-237000
	I0524 19:41:34.282389    2140 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 19:41:34.282389    2140 cache.go:57] Caching tarball of preloaded images
	I0524 19:41:34.282925    2140 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0524 19:41:34.283216    2140 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 19:41:34.283377    2140 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\config.json ...
	I0524 19:41:34.285607    2140 cache.go:195] Successfully downloaded all kic artifacts
	I0524 19:41:34.285607    2140 start.go:364] acquiring machines lock for multinode-237000-m03: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 19:41:34.285607    2140 start.go:368] acquired machines lock for "multinode-237000-m03" in 0s
	I0524 19:41:34.286150    2140 start.go:96] Skipping create...Using existing machine configuration
	I0524 19:41:34.286181    2140 fix.go:55] fixHost starting: m03
	I0524 19:41:34.286322    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:41:35.044821    2140 main.go:141] libmachine: [stdout =====>] : Off
	
	I0524 19:41:35.044821    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:35.044821    2140 fix.go:103] recreateIfNeeded on multinode-237000-m03: state=Stopped err=<nil>
	W0524 19:41:35.044821    2140 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 19:41:35.050068    2140 out.go:177] * Restarting existing hyperv VM for "multinode-237000-m03" ...
	I0524 19:41:35.053296    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-237000-m03
	I0524 19:41:36.770681    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:41:36.770681    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:36.770681    2140 main.go:141] libmachine: Waiting for host to start...
	I0524 19:41:36.770972    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:41:37.562515    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:37.562515    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:37.562515    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:38.717551    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:41:38.717551    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:39.723180    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:41:40.486268    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:40.486268    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:40.486268    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:41.575416    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:41:41.575416    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:42.577751    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:41:43.371033    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:43.371277    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:43.371390    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:44.435503    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:41:44.435503    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:45.441810    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:41:46.218049    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:46.218109    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:46.218180    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:47.318624    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:41:47.318624    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:48.319028    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:41:49.065468    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:49.065468    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:49.065468    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:50.160426    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:41:50.160497    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:51.175180    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:41:51.965979    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:51.966154    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:51.966154    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:53.045525    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:41:53.045826    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:54.049753    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:41:54.854109    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:54.854406    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:54.854460    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:55.948659    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:41:55.948659    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:56.950859    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:41:57.711964    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:41:57.711964    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:57.711964    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:41:58.758719    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:41:58.758804    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:41:59.774459    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:00.548058    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:00.548315    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:00.548315    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:01.636335    2140 main.go:141] libmachine: [stdout =====>] : 
	I0524 19:42:01.636335    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:02.650112    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:03.490441    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:03.490441    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:03.490595    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:04.658839    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:04.659047    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:04.661652    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:05.503663    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:05.503886    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:05.504065    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:06.627767    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:06.627937    2140 main.go:141] libmachine: [stderr =====>] : 
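The repeated Get-VM calls above are the Hyper-V driver polling until the guest's first network adapter reports an IPv4 lease (empty stdout means no lease yet). A minimal Go sketch of that retry loop, assuming powershell.exe is on PATH; the helper name and timeout are illustrative, not minikube's actual driver code:
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// waitForVMIP polls Hyper-V until the named VM's first adapter reports an
	// address, or the deadline passes. Empty stdout means "no lease yet".
	func waitForVMIP(vm string, timeout time.Duration) (string, error) {
		script := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
			if err != nil {
				return "", err
			}
			if ip := strings.TrimSpace(string(out)); ip != "" {
				return ip, nil
			}
			time.Sleep(time.Second) // roughly matches the ~1s gap between attempts in the log
		}
		return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
	}
	
	func main() {
		ip, err := waitForVMIP("multinode-237000-m03", 2*time.Minute)
		fmt.Println(ip, err)
	}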
	I0524 19:42:06.628058    2140 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000\config.json ...
	I0524 19:42:06.630528    2140 machine.go:88] provisioning docker machine ...
	I0524 19:42:06.630599    2140 buildroot.go:166] provisioning hostname "multinode-237000-m03"
	I0524 19:42:06.630599    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:07.421152    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:07.421152    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:07.421152    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:08.572575    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:08.572575    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:08.577613    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:42:08.578303    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.137.67 22 <nil> <nil>}
	I0524 19:42:08.578303    2140 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-237000-m03 && echo "multinode-237000-m03" | sudo tee /etc/hostname
	I0524 19:42:08.760126    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-237000-m03
	
	I0524 19:42:08.760203    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:09.551410    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:09.551410    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:09.551410    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:10.705505    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:10.705814    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:10.710122    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:42:10.710947    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.137.67 22 <nil> <nil>}
	I0524 19:42:10.710947    2140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-237000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-237000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-237000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 19:42:10.866944    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 19:42:10.866944    2140 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0524 19:42:10.866944    2140 buildroot.go:174] setting up certificates
	I0524 19:42:10.866944    2140 provision.go:83] configureAuth start
	I0524 19:42:10.866944    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:11.645465    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:11.645783    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:11.646001    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:12.754002    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:12.754070    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:12.754155    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:13.544396    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:13.544396    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:13.544639    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:14.632966    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:14.632966    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:14.633081    2140 provision.go:138] copyHostCerts
	I0524 19:42:14.633219    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem
	I0524 19:42:14.633219    2140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0524 19:42:14.633219    2140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0524 19:42:14.633769    2140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0524 19:42:14.634892    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem
	I0524 19:42:14.634947    2140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0524 19:42:14.634947    2140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0524 19:42:14.634947    2140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0524 19:42:14.636313    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem
	I0524 19:42:14.636313    2140 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0524 19:42:14.636843    2140 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0524 19:42:14.637160    2140 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0524 19:42:14.638211    2140 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-237000-m03 san=[172.27.137.67 172.27.137.67 localhost 127.0.0.1 minikube multinode-237000-m03]
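provision.go:112 above issues a per-node server certificate signed by the shared minikube CA, with the SANs listed in that log line. A hedged sketch of that kind of issuance using Go's standard crypto/x509; the file paths, organization string, validity window, and the assumption that the CA key is PKCS#1 RSA are illustrative, not minikube's implementation:
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Load the shared CA (assumed here to be an RSA key in PKCS#1 PEM form).
		caPEM, err := os.ReadFile("ca.pem")
		if err != nil {
			log.Fatal(err)
		}
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		if err != nil {
			log.Fatal(err)
		}
		caBlock, _ := pem.Decode(caPEM)
		keyBlock, _ := pem.Decode(caKeyPEM)
		if caBlock == nil || keyBlock == nil {
			log.Fatal("bad PEM input")
		}
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			log.Fatal(err)
		}
	
		// Fresh key pair for the node, and a template carrying the SANs from the log.
		serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-237000-m03"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "multinode-237000-m03"},
			IPAddresses:  []net.IP{net.ParseIP("172.27.137.67"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}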
	I0524 19:42:14.877910    2140 provision.go:172] copyRemoteCerts
	I0524 19:42:14.888029    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 19:42:14.888168    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:15.681985    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:15.682046    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:15.682046    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:16.783982    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:16.783982    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:16.784367    2140 sshutil.go:53] new ssh client: &{IP:172.27.137.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m03\id_rsa Username:docker}
	I0524 19:42:16.897766    2140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.0087239s)
	I0524 19:42:16.897766    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0524 19:42:16.898257    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0524 19:42:16.940719    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0524 19:42:16.941449    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0524 19:42:16.983402    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0524 19:42:16.983842    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0524 19:42:17.028197    2140 provision.go:86] duration metric: configureAuth took 6.1612557s
	I0524 19:42:17.028197    2140 buildroot.go:189] setting minikube options for container-runtime
	I0524 19:42:17.029083    2140 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:42:17.029083    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:17.810227    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:17.810405    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:17.810405    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:18.893018    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:18.893018    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:18.897519    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:42:18.898472    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.137.67 22 <nil> <nil>}
	I0524 19:42:18.898472    2140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 19:42:19.063535    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 19:42:19.063629    2140 buildroot.go:70] root file system type: tmpfs
	I0524 19:42:19.063891    2140 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 19:42:19.063936    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:19.840882    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:19.841286    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:19.841286    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:20.929785    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:20.930092    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:20.934986    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:42:20.935681    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.137.67 22 <nil> <nil>}
	I0524 19:42:20.935681    2140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.27.143.236"
	Environment="NO_PROXY=172.27.143.236,172.27.142.80"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 19:42:21.116494    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.27.143.236
	Environment=NO_PROXY=172.27.143.236,172.27.142.80
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 19:42:21.116571    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:21.873401    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:21.873427    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:21.873427    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:22.962942    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:22.963138    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:22.967359    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:42:22.968154    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.137.67 22 <nil> <nil>}
	I0524 19:42:22.968154    2140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 19:42:24.468201    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 19:42:24.468201    2140 machine.go:91] provisioned docker machine in 17.8376804s
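The one-liner at 19:42:22 installs the rendered unit only when it differs from what is on disk; here no unit existed yet, so it was moved into place and docker restarted. A rough Go equivalent of that compare-and-install step (hypothetical paths, meant to run as root inside the guest):
	package main
	
	import (
		"bytes"
		"fmt"
		"log"
		"os"
		"os/exec"
	)
	
	// installIfChanged mirrors the shell one-liner above: keep the existing unit
	// when the rendered one is identical, otherwise swap it in and restart docker.
	func installIfChanged(newPath, livePath string) error {
		fresh, err := os.ReadFile(newPath)
		if err != nil {
			return err
		}
		current, readErr := os.ReadFile(livePath) // a missing unit counts as "changed"
		if readErr == nil && bytes.Equal(fresh, current) {
			return os.Remove(newPath) // nothing to do
		}
		if err := os.Rename(newPath, livePath); err != nil {
			return err
		}
		for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}
	
	func main() {
		if err := installIfChanged("/lib/systemd/system/docker.service.new",
			"/lib/systemd/system/docker.service"); err != nil {
			log.Fatal(err)
		}
	}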
	I0524 19:42:24.468329    2140 start.go:300] post-start starting for "multinode-237000-m03" (driver="hyperv")
	I0524 19:42:24.468329    2140 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 19:42:24.479783    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 19:42:24.479783    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:25.225487    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:25.225487    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:25.225895    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:26.345857    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:26.345928    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:26.346453    2140 sshutil.go:53] new ssh client: &{IP:172.27.137.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m03\id_rsa Username:docker}
	I0524 19:42:26.459977    2140 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.9797341s)
	I0524 19:42:26.469166    2140 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 19:42:26.476353    2140 command_runner.go:130] > NAME=Buildroot
	I0524 19:42:26.476420    2140 command_runner.go:130] > VERSION=2021.02.12-1-g419828a-dirty
	I0524 19:42:26.476420    2140 command_runner.go:130] > ID=buildroot
	I0524 19:42:26.476420    2140 command_runner.go:130] > VERSION_ID=2021.02.12
	I0524 19:42:26.476420    2140 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0524 19:42:26.476522    2140 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 19:42:26.476522    2140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0524 19:42:26.476909    2140 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0524 19:42:26.477852    2140 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> 65602.pem in /etc/ssl/certs
	I0524 19:42:26.477938    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> /etc/ssl/certs/65602.pem
	I0524 19:42:26.487866    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0524 19:42:26.506274    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /etc/ssl/certs/65602.pem (1708 bytes)
	I0524 19:42:26.548484    2140 start.go:303] post-start completed in 2.0801563s
	I0524 19:42:26.548484    2140 fix.go:57] fixHost completed within 52.262324s
	I0524 19:42:26.548484    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:27.303082    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:27.303272    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:27.303350    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:28.395683    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:28.395916    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:28.400169    2140 main.go:141] libmachine: Using SSH client type: native
	I0524 19:42:28.400790    2140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.137.67 22 <nil> <nil>}
	I0524 19:42:28.400790    2140 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0524 19:42:28.558036    2140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684957348.555871037
	
	I0524 19:42:28.558036    2140 fix.go:207] guest clock: 1684957348.555871037
	I0524 19:42:28.558036    2140 fix.go:220] Guest: 2023-05-24 19:42:28.555871037 +0000 UTC Remote: 2023-05-24 19:42:26.5484848 +0000 UTC m=+235.506701801 (delta=2.007386237s)
	I0524 19:42:28.558036    2140 fix.go:191] guest clock delta is within tolerance: 2.007386237s
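The SSH command at 19:42:28 (its %s.%N format verbs are swallowed by the logger) asks the guest for its epoch time, and fix.go compares it against the host clock. A small sketch of that drift check, reusing the two timestamps from the log; the 3s tolerance is illustrative:
	package main
	
	import (
		"fmt"
		"strconv"
		"time"
	)
	
	// clockDelta parses the guest's `date +%s.%N` output and reports how far it
	// is from the host clock; the caller decides what tolerance is acceptable.
	func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOutput, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}
	
	func main() {
		// Values copied from the log: guest epoch vs. the host's view of "now".
		host := time.Date(2023, time.May, 24, 19, 42, 26, 548484800, time.UTC)
		delta, err := clockDelta("1684957348.555871037", host)
		if err != nil {
			panic(err)
		}
		within := delta < 3*time.Second && delta > -3*time.Second
		fmt.Printf("delta=%v within tolerance=%v\n", delta, within)
		// Prints a delta of roughly 2s, matching fix.go:191 above.
	}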
	I0524 19:42:28.558036    2140 start.go:83] releasing machines lock for "multinode-237000-m03", held for 54.2720043s
	I0524 19:42:28.558036    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:29.330083    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:29.330339    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:29.330339    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:30.424654    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:30.424654    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:30.427162    2140 out.go:177] * Found network options:
	I0524 19:42:30.429926    2140 out.go:177]   - NO_PROXY=172.27.143.236,172.27.142.80
	W0524 19:42:30.432088    2140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0524 19:42:30.432088    2140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0524 19:42:30.434295    2140 out.go:177]   - no_proxy=172.27.143.236,172.27.142.80
	W0524 19:42:30.436371    2140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0524 19:42:30.436371    2140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0524 19:42:30.438015    2140 proxy.go:119] fail to check proxy env: Error ip not in block
	W0524 19:42:30.438015    2140 proxy.go:119] fail to check proxy env: Error ip not in block
	I0524 19:42:30.440171    2140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 19:42:30.440171    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:30.448177    2140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0524 19:42:30.448177    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:42:31.251346    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:31.251478    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:31.251478    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:31.251478    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:31.251576    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:31.251652    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m03 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:32.430092    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:32.430253    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:32.430659    2140 sshutil.go:53] new ssh client: &{IP:172.27.137.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m03\id_rsa Username:docker}
	I0524 19:42:32.461273    2140 main.go:141] libmachine: [stdout =====>] : 172.27.137.67
	
	I0524 19:42:32.461273    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:32.461697    2140 sshutil.go:53] new ssh client: &{IP:172.27.137.67 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m03\id_rsa Username:docker}
	I0524 19:42:32.546619    2140 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0524 19:42:32.546880    2140 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (2.0985661s)
	W0524 19:42:32.546880    2140 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 19:42:32.557137    2140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 19:42:32.627241    2140 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0524 19:42:32.627331    2140 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.1871616s)
	I0524 19:42:32.627426    2140 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0524 19:42:32.627473    2140 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0524 19:42:32.627473    2140 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 19:42:32.635514    2140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 19:42:32.672767    2140 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.2
	I0524 19:42:32.672836    2140 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.2
	I0524 19:42:32.672836    2140 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.2
	I0524 19:42:32.672836    2140 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.2
	I0524 19:42:32.672836    2140 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0524 19:42:32.672836    2140 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0524 19:42:32.672836    2140 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0524 19:42:32.672836    2140 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0524 19:42:32.672836    2140 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0524 19:42:32.672955    2140 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 19:42:32.672978    2140 docker.go:563] Images already preloaded, skipping extraction
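docker.go:563 skips extracting the preload tarball because every required image is already present in the runtime. A sketch of that kind of check, with the image list copied from the stdout block above and an illustrative helper name:
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// alreadyPreloaded lists what the runtime has and reports whether every
	// required image is present, so extraction can be skipped.
	func alreadyPreloaded(required []string) (bool, error) {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			have[line] = true
		}
		for _, img := range required {
			if !have[img] {
				return false, nil
			}
		}
		return true, nil
	}
	
	func main() {
		ok, err := alreadyPreloaded([]string{
			"registry.k8s.io/kube-apiserver:v1.27.2",
			"registry.k8s.io/kube-proxy:v1.27.2",
			"registry.k8s.io/pause:3.9",
		})
		fmt.Println(ok, err)
	}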
	I0524 19:42:32.673022    2140 start.go:481] detecting cgroup driver to use...
	I0524 19:42:32.673156    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 19:42:32.706440    2140 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0524 19:42:32.716530    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 19:42:32.744134    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 19:42:32.762062    2140 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 19:42:32.771664    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 19:42:32.799619    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 19:42:32.828434    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 19:42:32.854430    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 19:42:32.881473    2140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 19:42:32.912428    2140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 19:42:32.938435    2140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 19:42:32.955124    2140 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0524 19:42:32.965084    2140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 19:42:33.001200    2140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:42:33.189440    2140 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 19:42:33.218022    2140 start.go:481] detecting cgroup driver to use...
	I0524 19:42:33.228508    2140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 19:42:33.247256    2140 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0524 19:42:33.247256    2140 command_runner.go:130] > [Unit]
	I0524 19:42:33.247256    2140 command_runner.go:130] > Description=Docker Application Container Engine
	I0524 19:42:33.247256    2140 command_runner.go:130] > Documentation=https://docs.docker.com
	I0524 19:42:33.247256    2140 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0524 19:42:33.247256    2140 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0524 19:42:33.247256    2140 command_runner.go:130] > StartLimitBurst=3
	I0524 19:42:33.247256    2140 command_runner.go:130] > StartLimitIntervalSec=60
	I0524 19:42:33.247256    2140 command_runner.go:130] > [Service]
	I0524 19:42:33.247256    2140 command_runner.go:130] > Type=notify
	I0524 19:42:33.247256    2140 command_runner.go:130] > Restart=on-failure
	I0524 19:42:33.247431    2140 command_runner.go:130] > Environment=NO_PROXY=172.27.143.236
	I0524 19:42:33.247431    2140 command_runner.go:130] > Environment=NO_PROXY=172.27.143.236,172.27.142.80
	I0524 19:42:33.247431    2140 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0524 19:42:33.247483    2140 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0524 19:42:33.247483    2140 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0524 19:42:33.247483    2140 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0524 19:42:33.247537    2140 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0524 19:42:33.247537    2140 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0524 19:42:33.247585    2140 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0524 19:42:33.247585    2140 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0524 19:42:33.247620    2140 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0524 19:42:33.247620    2140 command_runner.go:130] > ExecStart=
	I0524 19:42:33.247659    2140 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0524 19:42:33.247695    2140 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0524 19:42:33.247733    2140 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0524 19:42:33.247733    2140 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0524 19:42:33.247774    2140 command_runner.go:130] > LimitNOFILE=infinity
	I0524 19:42:33.247774    2140 command_runner.go:130] > LimitNPROC=infinity
	I0524 19:42:33.247774    2140 command_runner.go:130] > LimitCORE=infinity
	I0524 19:42:33.247805    2140 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0524 19:42:33.247805    2140 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0524 19:42:33.247851    2140 command_runner.go:130] > TasksMax=infinity
	I0524 19:42:33.247851    2140 command_runner.go:130] > TimeoutStartSec=0
	I0524 19:42:33.247851    2140 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0524 19:42:33.247889    2140 command_runner.go:130] > Delegate=yes
	I0524 19:42:33.247889    2140 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0524 19:42:33.247929    2140 command_runner.go:130] > KillMode=process
	I0524 19:42:33.247929    2140 command_runner.go:130] > [Install]
	I0524 19:42:33.247929    2140 command_runner.go:130] > WantedBy=multi-user.target
	I0524 19:42:33.258724    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 19:42:33.291922    2140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 19:42:33.332727    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 19:42:33.368213    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 19:42:33.398015    2140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 19:42:33.454813    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 19:42:33.475842    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 19:42:33.507509    2140 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0524 19:42:33.519024    2140 ssh_runner.go:195] Run: which cri-dockerd
	I0524 19:42:33.525223    2140 command_runner.go:130] > /usr/bin/cri-dockerd
	I0524 19:42:33.534865    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 19:42:33.550311    2140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 19:42:33.590441    2140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 19:42:33.771342    2140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 19:42:33.936692    2140 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 19:42:33.936742    2140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 19:42:33.977719    2140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:42:34.170763    2140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 19:42:35.857336    2140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6865739s)
	I0524 19:42:35.868426    2140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 19:42:36.050749    2140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 19:42:36.238974    2140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 19:42:36.440310    2140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:42:36.629297    2140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 19:42:36.669375    2140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 19:42:36.862710    2140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 19:42:36.985658    2140 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 19:42:36.995537    2140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 19:42:37.006133    2140 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0524 19:42:37.006198    2140 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0524 19:42:37.006198    2140 command_runner.go:130] > Device: 16h/22d	Inode: 888         Links: 1
	I0524 19:42:37.006198    2140 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0524 19:42:37.006198    2140 command_runner.go:130] > Access: 2023-05-24 19:42:36.887102423 +0000
	I0524 19:42:37.006198    2140 command_runner.go:130] > Modify: 2023-05-24 19:42:36.887102423 +0000
	I0524 19:42:37.006198    2140 command_runner.go:130] > Change: 2023-05-24 19:42:36.891102305 +0000
	I0524 19:42:37.006285    2140 command_runner.go:130] >  Birth: -
	I0524 19:42:37.006285    2140 start.go:549] Will wait 60s for crictl version
	I0524 19:42:37.022088    2140 ssh_runner.go:195] Run: which crictl
	I0524 19:42:37.027563    2140 command_runner.go:130] > /usr/bin/crictl
	I0524 19:42:37.042783    2140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 19:42:37.104479    2140 command_runner.go:130] > Version:  0.1.0
	I0524 19:42:37.104606    2140 command_runner.go:130] > RuntimeName:  docker
	I0524 19:42:37.104606    2140 command_runner.go:130] > RuntimeVersion:  20.10.23
	I0524 19:42:37.104606    2140 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0524 19:42:37.106603    2140 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 19:42:37.114575    2140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 19:42:37.158199    2140 command_runner.go:130] > 20.10.23
	I0524 19:42:37.165875    2140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 19:42:37.213896    2140 command_runner.go:130] > 20.10.23
	I0524 19:42:37.224952    2140 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 19:42:37.228003    2140 out.go:177]   - env NO_PROXY=172.27.143.236
	I0524 19:42:37.232485    2140 out.go:177]   - env NO_PROXY=172.27.143.236,172.27.142.80
	I0524 19:42:37.234987    2140 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0524 19:42:37.239046    2140 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0524 19:42:37.239670    2140 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0524 19:42:37.239724    2140 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0524 19:42:37.239724    2140 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:74:1b:be Flags:up|broadcast|multicast|running}
	I0524 19:42:37.243106    2140 ip.go:210] interface addr: fe80::2d9b:6c8:36de:16db/64
	I0524 19:42:37.243106    2140 ip.go:210] interface addr: 172.27.128.1/20
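ip.go:172-210 above walks the host's network interfaces looking for the one whose name starts with "vEthernet (Default Switch)" and takes its IPv4 address (172.27.128.1), which then becomes host.minikube.internal inside the guest. A hedged sketch of that lookup using the standard net package; this is illustrative, not minikube's actual ip.go:
	package main
	
	import (
		"fmt"
		"net"
		"strings"
	)
	
	// ipForInterface returns the first IPv4 address of the first interface whose
	// name starts with prefix, mirroring the matching shown in the log.
	func ipForInterface(prefix string) (net.IP, error) {
		ifaces, err := net.Interfaces()
		if err != nil {
			return nil, err
		}
		for _, iface := range ifaces {
			if !strings.HasPrefix(iface.Name, prefix) {
				continue
			}
			addrs, err := iface.Addrs()
			if err != nil {
				return nil, err
			}
			for _, a := range addrs {
				if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
					return ipnet.IP, nil
				}
			}
		}
		return nil, fmt.Errorf("no interface matching %q with an IPv4 address", prefix)
	}
	
	func main() {
		ip, err := ipForInterface("vEthernet (Default Switch)")
		fmt.Println(ip, err)
	}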
	I0524 19:42:37.253655    2140 ssh_runner.go:195] Run: grep 172.27.128.1	host.minikube.internal$ /etc/hosts
	I0524 19:42:37.260642    2140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 19:42:37.283317    2140 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\multinode-237000 for IP: 172.27.137.67
	I0524 19:42:37.283391    2140 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 19:42:37.284189    2140 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0524 19:42:37.284605    2140 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0524 19:42:37.284799    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0524 19:42:37.284865    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0524 19:42:37.284865    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0524 19:42:37.284865    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0524 19:42:37.285779    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem (1338 bytes)
	W0524 19:42:37.286181    2140 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560_empty.pem, impossibly tiny 0 bytes
	I0524 19:42:37.286254    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0524 19:42:37.286361    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0524 19:42:37.286361    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0524 19:42:37.286949    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0524 19:42:37.287103    2140 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem (1708 bytes)
	I0524 19:42:37.287626    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> /usr/share/ca-certificates/65602.pem
	I0524 19:42:37.287816    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:42:37.287875    2140 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem -> /usr/share/ca-certificates/6560.pem
	I0524 19:42:37.288644    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 19:42:37.337163    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 19:42:37.383444    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 19:42:37.431182    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 19:42:37.476704    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /usr/share/ca-certificates/65602.pem (1708 bytes)
	I0524 19:42:37.525232    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 19:42:37.570265    2140 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem --> /usr/share/ca-certificates/6560.pem (1338 bytes)
	I0524 19:42:37.628548    2140 ssh_runner.go:195] Run: openssl version
	I0524 19:42:37.636171    2140 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0524 19:42:37.646793    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65602.pem && ln -fs /usr/share/ca-certificates/65602.pem /etc/ssl/certs/65602.pem"
	I0524 19:42:37.675972    2140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65602.pem
	I0524 19:42:37.685009    2140 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 19:42:37.685953    2140 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 19:42:37.695895    2140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65602.pem
	I0524 19:42:37.704659    2140 command_runner.go:130] > 3ec20f2e
	I0524 19:42:37.714315    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65602.pem /etc/ssl/certs/3ec20f2e.0"
	I0524 19:42:37.742529    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 19:42:37.770056    2140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:42:37.780108    2140 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:42:37.780192    2140 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:42:37.789093    2140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 19:42:37.797639    2140 command_runner.go:130] > b5213941
	I0524 19:42:37.806889    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 19:42:37.833815    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6560.pem && ln -fs /usr/share/ca-certificates/6560.pem /etc/ssl/certs/6560.pem"
	I0524 19:42:37.860409    2140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6560.pem
	I0524 19:42:37.867582    2140 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 19:42:37.867641    2140 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 19:42:37.878051    2140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6560.pem
	I0524 19:42:37.885887    2140 command_runner.go:130] > 51391683
	I0524 19:42:37.896635    2140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6560.pem /etc/ssl/certs/51391683.0"
	I0524 19:42:37.925360    2140 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 19:42:37.931865    2140 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 19:42:37.931925    2140 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 19:42:37.939354    2140 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 19:42:37.987656    2140 command_runner.go:130] > cgroupfs
	I0524 19:42:37.988684    2140 cni.go:84] Creating CNI manager for ""
	I0524 19:42:37.988684    2140 cni.go:136] 3 nodes found, recommending kindnet
	I0524 19:42:37.988684    2140 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 19:42:37.988684    2140 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.137.67 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-237000 NodeName:multinode-237000-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.143.236"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.137.67 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 19:42:37.988684    2140 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.137.67
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-237000-m03"
	  kubeletExtraArgs:
	    node-ip: 172.27.137.67
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.143.236"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 19:42:37.988684    2140 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-237000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.137.67
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
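The unit text above is a systemd drop-in: the first, empty ExecStart= clears whatever command the packaged kubelet.service ships with, and the second ExecStart= substitutes the minikube-specific invocation. A minimal sketch of writing the same drop-in by hand, with the path and flags taken from the lines above (minikube itself transfers the file over SSH and only enables and starts kubelet after kubeadm join, further down in this log):

	# Write the override, then reload systemd so the new ExecStart is picked up.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=docker.socket

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-237000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.137.67

	[Install]
	EOF
	sudo systemctl daemon-reload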
	I0524 19:42:37.997650    2140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 19:42:38.016394    2140 command_runner.go:130] > kubeadm
	I0524 19:42:38.016394    2140 command_runner.go:130] > kubectl
	I0524 19:42:38.016394    2140 command_runner.go:130] > kubelet
	I0524 19:42:38.016467    2140 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 19:42:38.024662    2140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0524 19:42:38.038829    2140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0524 19:42:38.065375    2140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 19:42:38.104239    2140 ssh_runner.go:195] Run: grep 172.27.143.236	control-plane.minikube.internal$ /etc/hosts
	I0524 19:42:38.110240    2140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.143.236	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
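The bash one-liner above is minikube's idempotent /etc/hosts update: drop any existing line ending in a tab plus control-plane.minikube.internal, append the current mapping, write the result to a temp file, and sudo cp it back into place (cp keeps the existing file and only rewrites its contents). A generalized sketch of the same pattern, with the IP and hostname pulled out as placeholder variables:

	# HOST_IP and HOST_NAME are illustrative placeholders; this run uses the
	# values shown in the command above.
	HOST_IP=172.27.143.236
	HOST_NAME=control-plane.minikube.internal
	{ grep -v $'\t'"${HOST_NAME}"'$' /etc/hosts; printf '%s\t%s\n' "$HOST_IP" "$HOST_NAME"; } > "/tmp/hosts.$$"
	sudo cp "/tmp/hosts.$$" /etc/hosts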
	I0524 19:42:38.129134    2140 host.go:66] Checking if "multinode-237000" exists ...
	I0524 19:42:38.129923    2140 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:42:38.129852    2140 start.go:301] JoinCluster: &{Name:multinode-237000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.27.2 ClusterName:multinode-237000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.143.236 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.27.142.80 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.27.137.67 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 19:42:38.130078    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0524 19:42:38.130146    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:42:38.877122    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:38.877193    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:38.877193    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:39.980347    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:42:39.980347    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:39.980918    2140 sshutil.go:53] new ssh client: &{IP:172.27.143.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:42:40.185704    2140 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token x056o3.fatrm1fp60vth9ld --discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 
	I0524 19:42:40.185704    2140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm token create --print-join-command --ttl=0": (2.0556269s)
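Minting the join command is a single kubeadm call on the control-plane node: --print-join-command emits a ready-to-run line with a fresh bootstrap token and the CA certificate hash, and --ttl=0 makes that token non-expiring (kubeadm's default, visible in the InitConfiguration above, is 24h). Roughly:

	# Run on the control-plane node; prints
	# "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash sha256:..."
	sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" \
	  kubeadm token create --print-join-command --ttl=0
	# Existing bootstrap tokens can be listed (and revoked) afterwards:
	sudo kubeadm token list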
	I0524 19:42:40.185704    2140 start.go:314] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.27.137.67 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime: ControlPlane:false Worker:true}
	I0524 19:42:40.185704    2140 host.go:66] Checking if "multinode-237000" exists ...
	I0524 19:42:40.196804    2140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl drain multinode-237000-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0524 19:42:40.196804    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:42:40.973389    2140 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:42:40.973652    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:40.973652    2140 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:42:42.081318    2140 main.go:141] libmachine: [stdout =====>] : 172.27.143.236
	
	I0524 19:42:42.081375    2140 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:42:42.081375    2140 sshutil.go:53] new ssh client: &{IP:172.27.143.236 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:42:42.356594    2140 command_runner.go:130] > node/multinode-237000-m03 cordoned
	I0524 19:42:42.385257    2140 command_runner.go:130] > node/multinode-237000-m03 drained
	I0524 19:42:42.387812    2140 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0524 19:42:42.387898    2140 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-fzbwb, kube-system/kube-proxy-4qmlh
	I0524 19:42:42.387898    2140 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl drain multinode-237000-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (2.1910951s)
	I0524 19:42:42.388000    2140 node.go:108] successfully drained node "m03"
	I0524 19:42:42.388682    2140 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:42:42.389540    2140 kapi.go:59] client config for multinode-237000: &rest.Config{Host:"https://172.27.143.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:42:42.390299    2140 request.go:1188] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0524 19:42:42.390299    2140 round_trippers.go:463] DELETE https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:42.390299    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:42.390299    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:42.390299    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:42.390299    2140 round_trippers.go:473]     Content-Type: application/json
	I0524 19:42:42.405335    2140 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0524 19:42:42.405335    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:42.405335    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:42 GMT
	I0524 19:42:42.405406    2140 round_trippers.go:580]     Audit-Id: 78fe943f-4bac-4cc5-9eaa-eb53eb7724f7
	I0524 19:42:42.405406    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:42.405406    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:42.405406    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:42.405406    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:42.405406    2140 round_trippers.go:580]     Content-Length: 171
	I0524 19:42:42.405406    2140 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-237000-m03","kind":"nodes","uid":"05dd373e-a994-4789-af16-d10bfd472a98"}}
	I0524 19:42:42.405406    2140 node.go:124] successfully deleted node "m03"
	I0524 19:42:42.405406    2140 start.go:318] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.27.137.67 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime: ControlPlane:false Worker:true}
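Because a node named multinode-237000-m03 is still registered from the previous run, minikube first drains it (cordon plus eviction/deletion of its pods, with DaemonSet pods ignored) and then deletes the Node object with a raw DELETE request, as logged above. Done by hand, the same cleanup is roughly:

	# Cordon the node and remove its pods; DaemonSet-managed pods are skipped.
	kubectl drain multinode-237000-m03 --force --grace-period=1 \
	  --ignore-daemonsets --delete-emptydir-data
	# Delete the Node object itself, the equivalent of the
	# DELETE /api/v1/nodes/multinode-237000-m03 call above.
	kubectl delete node multinode-237000-m03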
	I0524 19:42:42.405406    2140 start.go:322] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.27.137.67 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime: ControlPlane:false Worker:true}
	I0524 19:42:42.405406    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x056o3.fatrm1fp60vth9ld --discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-237000-m03"
	I0524 19:42:42.792747    2140 command_runner.go:130] ! W0524 19:42:42.790123    1325 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0524 19:42:43.572582    2140 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 19:42:45.438738    2140 command_runner.go:130] > [preflight] Running pre-flight checks
	I0524 19:42:45.438822    2140 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0524 19:42:45.438822    2140 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0524 19:42:45.438822    2140 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 19:42:45.438936    2140 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 19:42:45.438936    2140 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0524 19:42:45.438936    2140 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0524 19:42:45.438972    2140 command_runner.go:130] > This node has joined the cluster:
	I0524 19:42:45.439005    2140 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0524 19:42:45.439029    2140 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0524 19:42:45.439029    2140 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0524 19:42:45.439053    2140 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token x056o3.fatrm1fp60vth9ld --discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-237000-m03": (3.0336488s)
	I0524 19:42:45.439113    2140 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0524 19:42:45.655746    2140 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0524 19:42:45.828782    2140 start.go:303] JoinCluster complete in 7.6989342s
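The join itself is one kubeadm invocation carrying the token and CA hash generated earlier, with --cri-socket pointed at cri-dockerd and --node-name pinning the registered node name; minikube then enables and starts the kubelet unit. Two follow-up checks: the first is kubeadm's own suggestion in the output above, the second is the standard recipe from the kubeadm documentation for recomputing the discovery hash, assuming the cluster CA sits at the path this config uses throughout:

	# On the control plane: confirm the new node has registered.
	kubectl get nodes -o wide
	# Recompute the --discovery-token-ca-cert-hash value from the cluster CA.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'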
	I0524 19:42:45.828782    2140 cni.go:84] Creating CNI manager for ""
	I0524 19:42:45.828782    2140 cni.go:136] 3 nodes found, recommending kindnet
	I0524 19:42:45.838343    2140 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0524 19:42:45.847328    2140 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0524 19:42:45.847328    2140 command_runner.go:130] >   Size: 2798344   	Blocks: 5472       IO Block: 4096   regular file
	I0524 19:42:45.847328    2140 command_runner.go:130] > Device: 11h/17d	Inode: 3542        Links: 1
	I0524 19:42:45.847328    2140 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0524 19:42:45.847328    2140 command_runner.go:130] > Access: 2023-05-24 19:39:02.222529100 +0000
	I0524 19:42:45.847328    2140 command_runner.go:130] > Modify: 2023-05-20 04:10:39.000000000 +0000
	I0524 19:42:45.847328    2140 command_runner.go:130] > Change: 2023-05-24 19:38:51.773000000 +0000
	I0524 19:42:45.847328    2140 command_runner.go:130] >  Birth: -
	I0524 19:42:45.847328    2140 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.27.2/kubectl ...
	I0524 19:42:45.847328    2140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0524 19:42:45.901984    2140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0524 19:42:46.390382    2140 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0524 19:42:46.390382    2140 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0524 19:42:46.390382    2140 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0524 19:42:46.390382    2140 command_runner.go:130] > daemonset.apps/kindnet configured
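With three nodes detected, the CNI manager re-applies the kindnet manifest; kubectl apply is idempotent, so the RBAC objects report "unchanged" and only the DaemonSet is reconfigured to cover the new node. Watching that rollout by hand would look something like:

	# Wait until the kindnet DaemonSet has a ready pod on every node.
	kubectl -n kube-system rollout status daemonset/kindnet
	kubectl -n kube-system get daemonset kindnet -o wide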
	I0524 19:42:46.391396    2140 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:42:46.392285    2140 kapi.go:59] client config for multinode-237000: &rest.Config{Host:"https://172.27.143.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:42:46.393181    2140 round_trippers.go:463] GET https://172.27.143.236:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0524 19:42:46.393181    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:46.393181    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:46.393181    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:46.396848    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:42:46.396848    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:46.396848    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:46.396848    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:46.396848    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:46.396848    2140 round_trippers.go:580]     Content-Length: 292
	I0524 19:42:46.396848    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:46 GMT
	I0524 19:42:46.396848    2140 round_trippers.go:580]     Audit-Id: cbe7b2ea-4627-4328-ba25-989e1442329d
	I0524 19:42:46.396848    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:46.396848    2140 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"9016559d-3c59-4f76-8961-1b5665cb8836","resourceVersion":"1294","creationTimestamp":"2023-05-24T19:27:11Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0524 19:42:46.396848    2140 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-237000" context rescaled to 1 replicas
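For multi-node profiles minikube pins CoreDNS at one replica: it reads the Deployment's Scale subresource (the GET above) and, if the count differs, writes it back with spec.replicas set to 1. The manual equivalent:

	# Inspect the current replica count, then force it to 1.
	kubectl -n kube-system get deployment coredns -o jsonpath='{.spec.replicas}{"\n"}'
	kubectl -n kube-system scale deployment coredns --replicas=1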
	I0524 19:42:46.396848    2140 start.go:223] Will wait 6m0s for node &{Name:m03 IP:172.27.137.67 Port:0 KubernetesVersion:v1.27.2 ContainerRuntime: ControlPlane:false Worker:true}
	I0524 19:42:46.400918    2140 out.go:177] * Verifying Kubernetes components...
	I0524 19:42:46.420564    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 19:42:46.441096    2140 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 19:42:46.441096    2140 kapi.go:59] client config for multinode-237000: &rest.Config{Host:"https://172.27.143.236:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\multinode-237000\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CA
Data:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 19:42:46.442596    2140 node_ready.go:35] waiting up to 6m0s for node "multinode-237000-m03" to be "Ready" ...
	I0524 19:42:46.442596    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:46.442596    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:46.442596    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:46.443124    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:46.445898    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:42:46.446970    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:46.446993    2140 round_trippers.go:580]     Audit-Id: ba33d79a-6152-4ca4-9fb5-c04c3a128586
	I0524 19:42:46.446993    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:46.446993    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:46.447030    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:46.447030    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:46.447030    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:46 GMT
	I0524 19:42:46.447030    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1539","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 4211 chars]
	I0524 19:42:46.954754    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:46.955090    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:46.955090    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:46.955090    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:46.961554    2140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0524 19:42:46.961554    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:46.961554    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:46.961554    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:46.961554    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:46.961554    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:46 GMT
	I0524 19:42:46.961554    2140 round_trippers.go:580]     Audit-Id: 3ffce151-8385-48bc-a585-130071513d64
	I0524 19:42:46.961554    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:46.962438    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:47.454849    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:47.455112    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:47.455202    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:47.455202    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:47.459041    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:42:47.459041    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:47.459041    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:47.459041    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:47.460082    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:47.460082    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:47 GMT
	I0524 19:42:47.460082    2140 round_trippers.go:580]     Audit-Id: d4af3c6c-2024-49a9-89d9-3dc84da27701
	I0524 19:42:47.460082    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:47.460518    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:47.957357    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:47.957427    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:47.957427    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:47.957427    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:47.961982    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:42:47.961982    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:47.961982    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:47.961982    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:47.961982    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:47.962134    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:47.962134    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:47 GMT
	I0524 19:42:47.962134    2140 round_trippers.go:580]     Audit-Id: 1dd7ea67-b879-4acc-a8c6-4e5bdea9cfa7
	I0524 19:42:47.962234    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:48.461617    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:48.461617    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:48.461617    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:48.461746    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:48.465933    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:42:48.466023    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:48.466023    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:48.466023    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:48.466023    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:48.466098    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:48.466098    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:48 GMT
	I0524 19:42:48.466098    2140 round_trippers.go:580]     Audit-Id: 958f1519-2cbc-4781-8401-76f0e19a1225
	I0524 19:42:48.466384    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:48.466789    2140 node_ready.go:58] node "multinode-237000-m03" has status "Ready":"False"
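The run of GETs above and below is minikube polling the Node object roughly every half second, for up to the 6m0s budget, until its Ready condition flips to True. The same wait expressed with kubectl instead of raw API calls:

	# Block until the node reports Ready=True, or give up after six minutes.
	kubectl wait --for=condition=Ready node/multinode-237000-m03 --timeout=6m
	# Or read the condition directly:
	kubectl get node multinode-237000-m03 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'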
	I0524 19:42:48.959794    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:48.959864    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:48.959864    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:48.959864    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:48.964536    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:42:48.964780    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:48.964780    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:48 GMT
	I0524 19:42:48.964780    2140 round_trippers.go:580]     Audit-Id: d29fb104-9164-4cbd-b64a-1e39fd096d87
	I0524 19:42:48.964780    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:48.964780    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:48.964780    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:48.964780    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:48.965223    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:49.461054    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:49.461119    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:49.461119    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:49.461119    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:49.468184    2140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0524 19:42:49.468184    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:49.468184    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:49.468184    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:49 GMT
	I0524 19:42:49.468184    2140 round_trippers.go:580]     Audit-Id: 5a4d0f52-25dc-4205-a519-8184f54be953
	I0524 19:42:49.468184    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:49.468184    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:49.468184    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:49.468184    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:49.948542    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:49.948601    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:49.948601    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:49.948601    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:49.957973    2140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0524 19:42:49.958057    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:49.958057    2140 round_trippers.go:580]     Audit-Id: b21f7452-1748-4357-abf3-ae5f07ffc281
	I0524 19:42:49.958057    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:49.958057    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:49.958057    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:49.958137    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:49.958137    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:49 GMT
	I0524 19:42:49.958319    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:50.448238    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:50.448290    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:50.448290    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:50.448290    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:50.452701    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:42:50.452701    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:50.452701    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:50.452701    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:50.453198    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:50 GMT
	I0524 19:42:50.453262    2140 round_trippers.go:580]     Audit-Id: d2e59117-c922-41e5-87ce-a470bcdcb616
	I0524 19:42:50.453262    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:50.453262    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:50.453546    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:50.953610    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:50.953610    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:50.953672    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:50.953672    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:50.957216    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:42:50.958244    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:50.958244    2140 round_trippers.go:580]     Audit-Id: 62a00298-3bed-4f73-a7b7-96ea884ebce2
	I0524 19:42:50.958244    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:50.958244    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:50.958244    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:50.958244    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:50.958314    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:50 GMT
	I0524 19:42:50.959071    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:50.959584    2140 node_ready.go:58] node "multinode-237000-m03" has status "Ready":"False"
	I0524 19:42:51.453282    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:51.453415    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:51.453415    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:51.453415    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:51.458205    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:42:51.458205    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:51.458205    2140 round_trippers.go:580]     Audit-Id: 130feece-141a-4ce0-811d-3ac9ed90b10a
	I0524 19:42:51.458511    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:51.458511    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:51.458511    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:51.458511    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:51.458511    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:51 GMT
	I0524 19:42:51.458855    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:51.957845    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:51.957910    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:51.957910    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:51.957974    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:51.965144    2140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0524 19:42:51.965144    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:51.965144    2140 round_trippers.go:580]     Audit-Id: 642dc60c-4f5a-44d8-82cf-66f4484809bd
	I0524 19:42:51.965144    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:51.965144    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:51.965144    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:51.965144    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:51.965144    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:51 GMT
	I0524 19:42:51.965144    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:52.456544    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:52.456604    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:52.456604    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:52.456604    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:52.460552    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:42:52.460552    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:52.460552    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:52 GMT
	I0524 19:42:52.460654    2140 round_trippers.go:580]     Audit-Id: 96efc36f-cdf1-47c9-a22d-9b9c57c7cd91
	I0524 19:42:52.460654    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:52.460654    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:52.460654    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:52.460654    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:52.460654    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:52.957993    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:52.957993    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:52.957993    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:52.957993    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:52.961598    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:42:52.962539    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:52.962539    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:52.962615    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:52 GMT
	I0524 19:42:52.962615    2140 round_trippers.go:580]     Audit-Id: f4c65a9e-df86-4999-aeff-d15e42b94b26
	I0524 19:42:52.962655    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:52.962669    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:52.962701    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:52.962884    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:52.963369    2140 node_ready.go:58] node "multinode-237000-m03" has status "Ready":"False"
	I0524 19:42:53.449881    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:53.449954    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:53.449954    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:53.449954    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:53.454769    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:42:53.454769    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:53.454769    2140 round_trippers.go:580]     Audit-Id: 438ab934-acdc-45c3-9b89-ba05ba9431fb
	I0524 19:42:53.454844    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:53.454844    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:53.454844    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:53.454844    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:53.454905    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:53 GMT
	I0524 19:42:53.455124    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:53.952762    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:53.952856    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:53.952856    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:53.952856    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:53.957066    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:42:53.957066    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:53.957140    2140 round_trippers.go:580]     Audit-Id: 7b75d8b1-d200-4b79-b44b-573dd9c79d2e
	I0524 19:42:53.957140    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:53.957140    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:53.957140    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:53.957140    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:53.957210    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:53 GMT
	I0524 19:42:53.957285    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1542","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 4320 chars]
	I0524 19:42:54.454222    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:54.454413    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:54.454475    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:54.454475    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:54.465866    2140 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0524 19:42:54.465866    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:54.465866    2140 round_trippers.go:580]     Audit-Id: 0eb4aefa-f026-4adf-b7df-33bb3be51784
	I0524 19:42:54.465866    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:54.465866    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:54.465866    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:54.465866    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:54.465866    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:54 GMT
	I0524 19:42:54.466215    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:42:54.950760    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:54.950760    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:54.950760    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:54.950760    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:54.955269    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:42:54.955316    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:54.955316    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:54 GMT
	I0524 19:42:54.955316    2140 round_trippers.go:580]     Audit-Id: cad2fb99-f618-45f3-b433-56081602204e
	I0524 19:42:54.955316    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:54.955316    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:54.955316    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:54.955316    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:54.955316    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:42:55.448117    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:55.448117    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:55.448117    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:55.448117    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:55.452742    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:42:55.452742    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:55.452742    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:55 GMT
	I0524 19:42:55.452742    2140 round_trippers.go:580]     Audit-Id: cb176a94-548f-4bbe-b844-997bc573a81b
	I0524 19:42:55.452742    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:55.452742    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:55.453571    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:55.453613    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:55.453848    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:42:55.454044    2140 node_ready.go:58] node "multinode-237000-m03" has status "Ready":"False"
	I0524 19:42:55.962750    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:55.962915    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:55.962915    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:55.962915    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:55.971251    2140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:42:55.971251    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:55.971251    2140 round_trippers.go:580]     Audit-Id: 9816df0d-a2a1-4d3a-bdef-4bb3f8b4ce5f
	I0524 19:42:55.971251    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:55.971251    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:55.971251    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:55.971251    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:55.971251    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:55 GMT
	I0524 19:42:55.971251    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:42:56.449378    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:56.449456    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:56.449456    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:56.449456    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:56.457706    2140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:42:56.457706    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:56.458635    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:56.458635    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:56.458635    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:56 GMT
	I0524 19:42:56.458635    2140 round_trippers.go:580]     Audit-Id: 6369e04a-de52-4876-89ad-a23050475be3
	I0524 19:42:56.458635    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:56.458635    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:56.459637    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:42:56.950057    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:56.950057    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:56.950265    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:56.950265    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:56.959361    2140 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0524 19:42:56.959361    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:56.959361    2140 round_trippers.go:580]     Audit-Id: 5d305f3e-1563-48d8-a4d7-05f3ed990462
	I0524 19:42:56.959361    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:56.959361    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:56.959361    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:56.959361    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:56.959361    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:56 GMT
	I0524 19:42:56.960329    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:42:57.462119    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:57.462119    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:57.462119    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:57.462119    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:57.465707    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:42:57.466709    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:57.466709    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:57.466709    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:57.466709    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:57 GMT
	I0524 19:42:57.466709    2140 round_trippers.go:580]     Audit-Id: 62578cac-d303-4edc-bddc-71d1a9452d3e
	I0524 19:42:57.466788    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:57.466788    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:57.466966    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:42:57.467401    2140 node_ready.go:58] node "multinode-237000-m03" has status "Ready":"False"
	I0524 19:42:57.949493    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:57.949493    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:57.949493    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:57.949493    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:57.955345    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:42:57.955345    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:57.955345    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:57.955345    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:57 GMT
	I0524 19:42:57.955428    2140 round_trippers.go:580]     Audit-Id: 9702df6f-b696-4561-a394-a190a5f2507c
	I0524 19:42:57.955428    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:57.955428    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:57.955428    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:57.955594    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:42:58.453441    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:58.453441    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:58.453441    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:58.453441    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:58.457640    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:42:58.458431    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:58.458486    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:58.458486    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:58 GMT
	I0524 19:42:58.458486    2140 round_trippers.go:580]     Audit-Id: 8cd2c42a-0e12-4141-b306-133af1bb154c
	I0524 19:42:58.458486    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:58.458486    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:58.458486    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:58.458486    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:42:58.956341    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:58.956341    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:58.956341    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:58.956341    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:58.959819    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:42:58.959819    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:58.959819    2140 round_trippers.go:580]     Audit-Id: 9827db21-7605-4635-a8b4-ab6484a9dad9
	I0524 19:42:58.959819    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:58.959819    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:58.960759    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:58.960759    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:58.960759    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:58 GMT
	I0524 19:42:58.960861    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:42:59.457299    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:59.457299    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:59.457376    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:59.457376    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:59.461706    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:42:59.461706    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:59.461706    2140 round_trippers.go:580]     Audit-Id: 5b464dd6-3564-43b4-b22d-081b0c7dfe7f
	I0524 19:42:59.461706    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:59.461970    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:59.461970    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:59.461970    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:59.461970    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:59 GMT
	I0524 19:42:59.462569    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:42:59.956449    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:42:59.956449    2140 round_trippers.go:469] Request Headers:
	I0524 19:42:59.956449    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:42:59.956449    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:42:59.961285    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:42:59.961285    2140 round_trippers.go:577] Response Headers:
	I0524 19:42:59.961285    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:42:59.961285    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:42:59.961285    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:42:59.961380    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:42:59 GMT
	I0524 19:42:59.961380    2140 round_trippers.go:580]     Audit-Id: 7a85dcea-4f0c-4c2e-9c59-b663bf9bdf41
	I0524 19:42:59.961380    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:42:59.961592    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:42:59.961737    2140 node_ready.go:58] node "multinode-237000-m03" has status "Ready":"False"
	I0524 19:43:00.449600    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:00.449669    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:00.449669    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:00.449669    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:00.453579    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:43:00.453579    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:00.453579    2140 round_trippers.go:580]     Audit-Id: ad60319b-e980-4354-a6e8-ebb77012e468
	I0524 19:43:00.454626    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:00.454626    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:00.454626    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:00.454662    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:00.454662    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:00 GMT
	I0524 19:43:00.454896    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:43:00.955471    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:00.955522    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:00.955556    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:00.955556    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:00.959688    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:43:00.959688    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:00.959688    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:00.959688    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:00.959688    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:00 GMT
	I0524 19:43:00.960609    2140 round_trippers.go:580]     Audit-Id: bf54971d-33bf-4742-9e40-f290ce47e491
	I0524 19:43:00.960609    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:00.960609    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:00.960842    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:43:01.457324    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:01.457405    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:01.457405    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:01.457472    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:01.461253    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:43:01.461593    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:01.461593    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:01.461593    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:01.461593    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:01 GMT
	I0524 19:43:01.461593    2140 round_trippers.go:580]     Audit-Id: 90d6a639-80d5-40cc-8f80-8c1906e563aa
	I0524 19:43:01.461688    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:01.461688    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:01.461948    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:43:01.951621    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:01.951746    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:01.951746    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:01.951854    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:01.957045    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:43:01.957045    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:01.957045    2140 round_trippers.go:580]     Audit-Id: 443716a2-e63c-4a81-8a7a-e564b8a50378
	I0524 19:43:01.957045    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:01.957045    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:01.957045    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:01.957045    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:01.957045    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:01 GMT
	I0524 19:43:01.957285    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:43:02.455237    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:02.455321    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:02.455321    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:02.455321    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:02.459576    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:43:02.459576    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:02.459576    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:02.459576    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:02 GMT
	I0524 19:43:02.459576    2140 round_trippers.go:580]     Audit-Id: 67892142-df9c-46fe-9472-285cfa72312f
	I0524 19:43:02.459576    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:02.459576    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:02.459576    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:02.459576    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:43:02.461019    2140 node_ready.go:58] node "multinode-237000-m03" has status "Ready":"False"
	I0524 19:43:02.960307    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:02.960371    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:02.960371    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:02.960371    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:02.964036    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:43:02.964036    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:02.964036    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:02.964036    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:02.964036    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:02 GMT
	I0524 19:43:02.964036    2140 round_trippers.go:580]     Audit-Id: 276357e3-5cae-4c21-ba2a-61c0ca2f0b16
	I0524 19:43:02.964951    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:02.964951    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:02.964951    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:43:03.448736    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:03.450608    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:03.450707    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:03.450707    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:03.456514    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:43:03.456514    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:03.456603    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:03.456603    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:03.456603    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:03.456603    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:03 GMT
	I0524 19:43:03.456663    2140 round_trippers.go:580]     Audit-Id: c61bc87c-70e5-4f84-8cb2-a67c74e0547c
	I0524 19:43:03.456663    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:03.456663    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:43:03.954870    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:03.954870    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:03.954870    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:03.954870    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:03.958881    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:43:03.958881    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:03.959119    2140 round_trippers.go:580]     Audit-Id: ce2088a4-4792-4cdf-999d-9427c8b47211
	I0524 19:43:03.959119    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:03.959119    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:03.959119    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:03.959176    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:03.959176    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:03 GMT
	I0524 19:43:03.959363    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:43:04.460897    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:04.460897    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:04.460897    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:04.460897    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:04.469534    2140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:43:04.469854    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:04.469904    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:04 GMT
	I0524 19:43:04.469904    2140 round_trippers.go:580]     Audit-Id: 21c7b8e6-9b44-48c9-9d03-616b525916f9
	I0524 19:43:04.469904    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:04.469904    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:04.469904    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:04.469904    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:04.470039    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:43:04.470039    2140 node_ready.go:58] node "multinode-237000-m03" has status "Ready":"False"
	I0524 19:43:04.949755    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:04.949798    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:04.949827    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:04.949827    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:04.955162    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:43:04.956129    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:04.956129    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:04.956129    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:04 GMT
	I0524 19:43:04.956129    2140 round_trippers.go:580]     Audit-Id: c0fa2fa0-d049-4fd5-bb39-d1b0aecbbcc3
	I0524 19:43:04.956129    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:04.956129    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:04.956129    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:04.956129    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:43:05.454368    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:05.454368    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:05.454368    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:05.454434    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:05.457728    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:43:05.458762    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:05.458762    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:05.458762    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:05.458762    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:05.458762    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:05.458832    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:05 GMT
	I0524 19:43:05.458832    2140 round_trippers.go:580]     Audit-Id: b9a75488-b5d5-4a5f-b461-ee27be87f876
	I0524 19:43:05.458941    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:43:05.956218    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:05.956218    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:05.956285    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:05.956285    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:05.960672    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:43:05.960764    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:05.960764    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:05.960764    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:05.960764    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:05.960764    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:05.960764    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:05 GMT
	I0524 19:43:05.960848    2140 round_trippers.go:580]     Audit-Id: f66a57a9-ff3f-40c0-b51e-f0c0e040a8bd
	I0524 19:43:05.961006    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:43:06.458110    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:06.458110    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:06.458110    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:06.458110    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:06.463335    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:43:06.463335    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:06.463335    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:06.463335    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:06.463426    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:06.463426    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:06.463426    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:06 GMT
	I0524 19:43:06.463426    2140 round_trippers.go:580]     Audit-Id: ede71ae9-97e5-4e43-b2c3-70c593fc73aa
	I0524 19:43:06.463498    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:43:06.961710    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:06.961710    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:06.961710    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:06.961710    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:06.965415    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:43:06.965415    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:06.965415    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:06.965913    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:06.965913    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:06.965913    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:06 GMT
	I0524 19:43:06.965913    2140 round_trippers.go:580]     Audit-Id: deaee3c0-eec6-4847-b78c-c1a2105eea69
	I0524 19:43:06.965913    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:06.966037    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1551","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4430 chars]
	I0524 19:43:06.966942    2140 node_ready.go:58] node "multinode-237000-m03" has status "Ready":"False"
	I0524 19:43:07.454025    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:07.454089    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:07.454089    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:07.454089    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:07.461256    2140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0524 19:43:07.461256    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:07.461256    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:07.461256    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:07.461256    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:07.461256    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:07 GMT
	I0524 19:43:07.461256    2140 round_trippers.go:580]     Audit-Id: bfb67015-2a54-44d3-b464-2f81dda31c34
	I0524 19:43:07.461256    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:07.461256    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1579","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4296 chars]
	I0524 19:43:07.462623    2140 node_ready.go:49] node "multinode-237000-m03" has status "Ready":"True"
	I0524 19:43:07.462623    2140 node_ready.go:38] duration metric: took 21.0200381s waiting for node "multinode-237000-m03" to be "Ready" ...
	I0524 19:43:07.462623    2140 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 19:43:07.462623    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods
	I0524 19:43:07.462623    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:07.462623    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:07.462879    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:07.471200    2140 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0524 19:43:07.471200    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:07.471200    2140 round_trippers.go:580]     Audit-Id: 8bc25a45-0714-4eb6-a3ee-4d8bb32043d8
	I0524 19:43:07.471200    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:07.471200    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:07.471200    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:07.471200    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:07.471200    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:07 GMT
	I0524 19:43:07.474225    2140 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1579"},"items":[{"metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1290","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82964 chars]
	I0524 19:43:07.478299    2140 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:07.478963    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-qhx48
	I0524 19:43:07.478963    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:07.478963    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:07.479061    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:07.482358    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:43:07.482358    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:07.482358    2140 round_trippers.go:580]     Audit-Id: 85ad92c0-ea65-48cf-9fbf-a64eecc5bb93
	I0524 19:43:07.482358    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:07.482358    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:07.482358    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:07.482358    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:07.482358    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:07 GMT
	I0524 19:43:07.483355    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-qhx48","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"12d04c63-9898-4ccf-9e6d-92d8f3d086a4","resourceVersion":"1290","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"7d813336-1177-4460-bc66-3c1c082a3e71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7d813336-1177-4460-bc66-3c1c082a3e71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6494 chars]
	I0524 19:43:07.483355    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:43:07.483355    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:07.483355    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:07.484174    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:07.487975    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:43:07.488341    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:07.488341    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:07.488341    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:07.488341    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:07 GMT
	I0524 19:43:07.488341    2140 round_trippers.go:580]     Audit-Id: b2b2a8ab-792b-40a8-addc-db8737ead77f
	I0524 19:43:07.488341    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:07.488341    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:07.488565    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:43:07.488959    2140 pod_ready.go:92] pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace has status "Ready":"True"
	I0524 19:43:07.489040    2140 pod_ready.go:81] duration metric: took 10.1309ms waiting for pod "coredns-5d78c9869d-qhx48" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:07.489040    2140 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:07.489217    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-237000
	I0524 19:43:07.489217    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:07.489269    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:07.489269    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:07.492389    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:43:07.492457    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:07.492457    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:07.492457    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:07.492457    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:07.492457    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:07.492457    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:07 GMT
	I0524 19:43:07.492457    2140 round_trippers.go:580]     Audit-Id: 524de2ff-4b18-4a9d-b167-eed10aa1ce12
	I0524 19:43:07.492732    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-237000","namespace":"kube-system","uid":"4b73c6ae-c8c9-444c-a5b5-a4bb2e724689","resourceVersion":"1274","creationTimestamp":"2023-05-24T19:39:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.27.143.236:2379","kubernetes.io/config.hash":"a462e4d9e600aa9f863cde3f240bd69a","kubernetes.io/config.mirror":"a462e4d9e600aa9f863cde3f240bd69a","kubernetes.io/config.seen":"2023-05-24T19:39:40.956259078Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:39:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 5873 chars]
	I0524 19:43:07.493234    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:43:07.493234    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:07.493234    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:07.493234    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:07.495420    2140 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0524 19:43:07.495420    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:07.495420    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:07.495420    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:07 GMT
	I0524 19:43:07.495420    2140 round_trippers.go:580]     Audit-Id: c41adb51-5510-49b0-a960-49123f9dd827
	I0524 19:43:07.495420    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:07.495420    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:07.496325    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:07.496637    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:43:07.496637    2140 pod_ready.go:92] pod "etcd-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:43:07.496637    2140 pod_ready.go:81] duration metric: took 7.597ms waiting for pod "etcd-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:07.496637    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:07.496637    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-237000
	I0524 19:43:07.496637    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:07.496637    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:07.496637    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:07.501752    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:43:07.501843    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:07.501843    2140 round_trippers.go:580]     Audit-Id: c9b52b26-b91a-44a2-a20d-9c1298856aab
	I0524 19:43:07.501843    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:07.501843    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:07.501843    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:07.501843    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:07.501946    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:07 GMT
	I0524 19:43:07.502131    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-237000","namespace":"kube-system","uid":"46721249-af81-40ba-b756-6f9def350d07","resourceVersion":"1248","creationTimestamp":"2023-05-24T19:39:50Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.27.143.236:8443","kubernetes.io/config.hash":"4278bfa912c61c7340a8d49488981a6d","kubernetes.io/config.mirror":"4278bfa912c61c7340a8d49488981a6d","kubernetes.io/config.seen":"2023-05-24T19:39:40.956261577Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:39:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 7409 chars]
	I0524 19:43:07.502631    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:43:07.502631    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:07.502715    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:07.502737    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:07.509731    2140 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0524 19:43:07.509731    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:07.509731    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:07 GMT
	I0524 19:43:07.509731    2140 round_trippers.go:580]     Audit-Id: 6802c32a-c7c4-4662-9fe4-201fad9fbc6f
	I0524 19:43:07.509731    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:07.509731    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:07.509731    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:07.509731    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:07.509731    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:43:07.509731    2140 pod_ready.go:92] pod "kube-apiserver-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:43:07.509731    2140 pod_ready.go:81] duration metric: took 13.094ms waiting for pod "kube-apiserver-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:07.509731    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:07.509731    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-237000
	I0524 19:43:07.509731    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:07.509731    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:07.509731    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:07.514369    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:43:07.514369    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:07.514369    2140 round_trippers.go:580]     Audit-Id: be5b0eb4-60af-4b8e-890e-b68ced22cd98
	I0524 19:43:07.514369    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:07.514369    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:07.514369    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:07.514369    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:07.514369    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:07 GMT
	I0524 19:43:07.514369    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-237000","namespace":"kube-system","uid":"1ff7b570-afe4-4076-989f-d0377d04f9d5","resourceVersion":"1273","creationTimestamp":"2023-05-24T19:27:10Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"64b5c92760605da2056b367669d6fc80","kubernetes.io/config.mirror":"64b5c92760605da2056b367669d6fc80","kubernetes.io/config.seen":"2023-05-24T19:27:00.264375644Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7179 chars]
	I0524 19:43:07.515775    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:43:07.515775    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:07.515775    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:07.515775    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:07.521237    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:43:07.521237    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:07.521237    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:07.521237    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:07 GMT
	I0524 19:43:07.521237    2140 round_trippers.go:580]     Audit-Id: c4bc2e60-5f2a-4a9b-b638-b762b5184771
	I0524 19:43:07.521237    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:07.521237    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:07.521986    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:07.522247    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:43:07.522916    2140 pod_ready.go:92] pod "kube-controller-manager-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:43:07.522990    2140 pod_ready.go:81] duration metric: took 13.259ms waiting for pod "kube-controller-manager-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:07.522990    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4qmlh" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:07.656295    2140 request.go:628] Waited for 133.1231ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qmlh
	I0524 19:43:07.656295    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qmlh
	I0524 19:43:07.656295    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:07.656295    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:07.656295    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:07.660894    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:43:07.660894    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:07.660894    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:07.660894    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:07 GMT
	I0524 19:43:07.660894    2140 round_trippers.go:580]     Audit-Id: c26e1bf1-d080-41c1-a23b-6e107a0d9995
	I0524 19:43:07.661653    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:07.661653    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:07.661653    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:07.662061    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-4qmlh","generateName":"kube-proxy-","namespace":"kube-system","uid":"3c277e06-12a4-451c-ad5b-15cc2bd169ad","resourceVersion":"1571","creationTimestamp":"2023-05-24T19:32:20Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:32:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5745 chars]
	I0524 19:43:07.862252    2140 request.go:628] Waited for 199.5667ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:07.862252    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m03
	I0524 19:43:07.862589    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:07.862589    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:07.862589    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:07.866946    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:43:07.867184    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:07.867184    2140 round_trippers.go:580]     Audit-Id: 3497994e-34e2-44bb-944b-2adcab5428b3
	I0524 19:43:07.867184    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:07.867184    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:07.867184    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:07.867184    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:07.867184    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:07 GMT
	I0524 19:43:07.867394    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m03","uid":"c6b00769-bb95-4486-ad71-de5fb4a37461","resourceVersion":"1579","creationTimestamp":"2023-05-24T19:42:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:42:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4296 chars]
	I0524 19:43:07.867846    2140 pod_ready.go:92] pod "kube-proxy-4qmlh" in "kube-system" namespace has status "Ready":"True"
	I0524 19:43:07.867998    2140 pod_ready.go:81] duration metric: took 345.0076ms waiting for pod "kube-proxy-4qmlh" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:07.867998    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-r6f94" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:08.066817    2140 request.go:628] Waited for 198.8196ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6f94
	I0524 19:43:08.066817    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-r6f94
	I0524 19:43:08.066817    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:08.066817    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:08.066817    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:08.072036    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:43:08.072036    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:08.072036    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:08.072036    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:08.072036    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:08.072036    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:08 GMT
	I0524 19:43:08.072036    2140 round_trippers.go:580]     Audit-Id: 47ecf571-e46f-4ada-85af-06f1c60b58fa
	I0524 19:43:08.072150    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:08.072327    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-r6f94","generateName":"kube-proxy-","namespace":"kube-system","uid":"90a232cf-33b3-4e3b-82bf-9050d39109d1","resourceVersion":"1243","creationTimestamp":"2023-05-24T19:27:24Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I0524 19:43:08.267958    2140 request.go:628] Waited for 195.297ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:43:08.268221    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:43:08.268221    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:08.268221    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:08.268221    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:08.271665    2140 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0524 19:43:08.272235    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:08.272235    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:08.272307    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:08.272349    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:08.272363    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:08.272363    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:08 GMT
	I0524 19:43:08.272363    2140 round_trippers.go:580]     Audit-Id: 8c9e1f37-d86c-4751-9830-1a893b9249d8
	I0524 19:43:08.272551    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:43:08.272819    2140 pod_ready.go:92] pod "kube-proxy-r6f94" in "kube-system" namespace has status "Ready":"True"
	I0524 19:43:08.272819    2140 pod_ready.go:81] duration metric: took 404.8212ms waiting for pod "kube-proxy-r6f94" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:08.272819    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zglzj" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:08.455031    2140 request.go:628] Waited for 181.9732ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zglzj
	I0524 19:43:08.455101    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zglzj
	I0524 19:43:08.455101    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:08.455101    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:08.455101    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:08.460691    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:43:08.461159    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:08.461210    2140 round_trippers.go:580]     Audit-Id: e80036bc-613b-4225-8436-5c374a6cf65f
	I0524 19:43:08.461210    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:08.461235    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:08.461235    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:08.461235    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:08.461296    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:08 GMT
	I0524 19:43:08.461788    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-zglzj","generateName":"kube-proxy-","namespace":"kube-system","uid":"af1fb911-5877-4bcc-92f4-5571f489122c","resourceVersion":"1419","creationTimestamp":"2023-05-24T19:29:22Z","labels":{"controller-revision-hash":"8bdf7b6c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"b56fd4e2-14cc-4023-9d9d-258e72fae527","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:29:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b56fd4e2-14cc-4023-9d9d-258e72fae527\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5745 chars]
	I0524 19:43:08.659393    2140 request.go:628] Waited for 196.7542ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:43:08.660426    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000-m02
	I0524 19:43:08.662709    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:08.662709    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:08.662709    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:08.667021    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:43:08.667021    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:08.667021    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:08.667021    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:08 GMT
	I0524 19:43:08.667021    2140 round_trippers.go:580]     Audit-Id: 017503c3-6240-4c0f-9ce0-b3e5e97a382a
	I0524 19:43:08.667021    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:08.667021    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:08.667021    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:08.668055    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000-m02","uid":"6734a554-9e3b-4956-bee3-a58cca9d1d83","resourceVersion":"1443","creationTimestamp":"2023-05-24T19:41:22Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:41:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 4345 chars]
	I0524 19:43:08.668615    2140 pod_ready.go:92] pod "kube-proxy-zglzj" in "kube-system" namespace has status "Ready":"True"
	I0524 19:43:08.668685    2140 pod_ready.go:81] duration metric: took 395.7963ms waiting for pod "kube-proxy-zglzj" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:08.668685    2140 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:08.863077    2140 request.go:628] Waited for 194.3203ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-237000
	I0524 19:43:08.863410    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-237000
	I0524 19:43:08.863410    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:08.863495    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:08.863534    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:08.871441    2140 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0524 19:43:08.871441    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:08.871441    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:08.871441    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:08.871441    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:08.871441    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:08 GMT
	I0524 19:43:08.871441    2140 round_trippers.go:580]     Audit-Id: 4d36a15c-bf32-4e74-9a2e-a81898d8a692
	I0524 19:43:08.871441    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:08.871441    2140 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-237000","namespace":"kube-system","uid":"a55c419f-1b04-4895-9fd5-02dd67cd888f","resourceVersion":"1252","creationTimestamp":"2023-05-24T19:27:12Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b26a06953be724b5f34183ed712fbb3d","kubernetes.io/config.mirror":"b26a06953be724b5f34183ed712fbb3d","kubernetes.io/config.seen":"2023-05-24T19:27:12.143961333Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-05-24T19:27:12Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4909 chars]
	I0524 19:43:09.063658    2140 request.go:628] Waited for 191.5028ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:43:09.063658    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes/multinode-237000
	I0524 19:43:09.063658    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:09.063658    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:09.063658    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:09.068711    2140 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0524 19:43:09.068794    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:09.068794    2140 round_trippers.go:580]     Audit-Id: cb443553-1fa1-4df2-bc95-1eddd5567007
	I0524 19:43:09.068794    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:09.068794    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:09.068794    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:09.068794    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:09.068861    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:09 GMT
	I0524 19:43:09.069067    2140 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-05-24T19:27:07Z","fieldsType":"FieldsV1","f [truncated 5240 chars]
	I0524 19:43:09.069095    2140 pod_ready.go:92] pod "kube-scheduler-multinode-237000" in "kube-system" namespace has status "Ready":"True"
	I0524 19:43:09.069095    2140 pod_ready.go:81] duration metric: took 400.4101ms waiting for pod "kube-scheduler-multinode-237000" in "kube-system" namespace to be "Ready" ...
	I0524 19:43:09.069095    2140 pod_ready.go:38] duration metric: took 1.6064724s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 19:43:09.069095    2140 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 19:43:09.084378    2140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 19:43:09.107166    2140 system_svc.go:56] duration metric: took 38.0712ms WaitForService to wait for kubelet.
	I0524 19:43:09.107166    2140 kubeadm.go:581] duration metric: took 22.7103299s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 19:43:09.107166    2140 node_conditions.go:102] verifying NodePressure condition ...
	I0524 19:43:09.265317    2140 request.go:628] Waited for 157.9446ms due to client-side throttling, not priority and fairness, request: GET:https://172.27.143.236:8443/api/v1/nodes
	I0524 19:43:09.265317    2140 round_trippers.go:463] GET https://172.27.143.236:8443/api/v1/nodes
	I0524 19:43:09.265317    2140 round_trippers.go:469] Request Headers:
	I0524 19:43:09.265317    2140 round_trippers.go:473]     Accept: application/json, */*
	I0524 19:43:09.265317    2140 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0524 19:43:09.269737    2140 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0524 19:43:09.270084    2140 round_trippers.go:577] Response Headers:
	I0524 19:43:09.270084    2140 round_trippers.go:580]     Audit-Id: d5067563-2294-4379-a28b-cc55ecc84157
	I0524 19:43:09.270084    2140 round_trippers.go:580]     Cache-Control: no-cache, private
	I0524 19:43:09.270084    2140 round_trippers.go:580]     Content-Type: application/json
	I0524 19:43:09.270084    2140 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 03eec758-d47b-4175-bd6a-134d3be7baed
	I0524 19:43:09.270084    2140 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 5b6ce086-1ba2-40df-ae0a-6e58b0e6e689
	I0524 19:43:09.270155    2140 round_trippers.go:580]     Date: Wed, 24 May 2023 19:43:09 GMT
	I0524 19:43:09.271434    2140 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1580"},"items":[{"metadata":{"name":"multinode-237000","uid":"711c2a37-869f-4744-acb0-c2b2d7c34061","resourceVersion":"1262","creationTimestamp":"2023-05-24T19:27:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-237000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e","minikube.k8s.io/name":"multinode-237000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_05_24T19_27_13_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15919 chars]
	I0524 19:43:09.272498    2140 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:43:09.272498    2140 node_conditions.go:123] node cpu capacity is 2
	I0524 19:43:09.272498    2140 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:43:09.272498    2140 node_conditions.go:123] node cpu capacity is 2
	I0524 19:43:09.272498    2140 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 19:43:09.272498    2140 node_conditions.go:123] node cpu capacity is 2
	I0524 19:43:09.272498    2140 node_conditions.go:105] duration metric: took 165.3316ms to run NodePressure ...
	I0524 19:43:09.272498    2140 start.go:228] waiting for startup goroutines ...
	I0524 19:43:09.272498    2140 start.go:242] writing updated cluster config ...
	I0524 19:43:09.288680    2140 ssh_runner.go:195] Run: rm -f paused
	I0524 19:43:09.501679    2140 start.go:568] kubectl: 1.18.2, cluster: 1.27.2 (minor skew: 9)
	I0524 19:43:09.504270    2140 out.go:177] 
	W0524 19:43:09.507406    2140 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.27.2.
	I0524 19:43:09.513693    2140 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0524 19:43:09.518912    2140 out.go:177] * Done! kubectl is now configured to use "multinode-237000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-05-24 19:38:53 UTC, ends at Wed 2023-05-24 19:43:19 UTC. --
	May 24 19:40:06 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:06.381959738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:40:06 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:06.382000735Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:40:06 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:06.382019534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:40:06 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:06.400184755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:40:06 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:06.400449839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:40:06 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:06.400568532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:40:06 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:06.400596631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:40:07 multinode-237000 cri-dockerd[1218]: time="2023-05-24T19:40:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ad7642112aa94466071453e6f567b007ce3c20bbaa66a5bec8a675b507ebbf5a/resolv.conf as [nameserver 172.27.128.1]"
	May 24 19:40:07 multinode-237000 cri-dockerd[1218]: time="2023-05-24T19:40:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c36a90d64f44cbe3e7b813b0552fe87edc4244f7d08f14a5fe638740b5385cd/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 24 19:40:07 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:07.544374607Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:40:07 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:07.545144064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:40:07 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:07.545281056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:40:07 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:07.545639836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:40:07 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:07.546595783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:40:07 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:07.546775573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:40:07 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:07.546817171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:40:07 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:07.546835870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:40:22 multinode-237000 dockerd[1007]: time="2023-05-24T19:40:22.818498076Z" level=info msg="ignoring event" container=777f8c6ebde34191b4cc66bb27d99f75fc5dc837353e1b61c2d7810a04d2a1f6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 24 19:40:22 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:22.818789970Z" level=info msg="shim disconnected" id=777f8c6ebde34191b4cc66bb27d99f75fc5dc837353e1b61c2d7810a04d2a1f6 namespace=moby
	May 24 19:40:22 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:22.818859868Z" level=warning msg="cleaning up after shim disconnected" id=777f8c6ebde34191b4cc66bb27d99f75fc5dc837353e1b61c2d7810a04d2a1f6 namespace=moby
	May 24 19:40:22 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:22.818871768Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 19:40:35 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:35.267446368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 19:40:35 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:35.267829365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 19:40:35 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:35.267914964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 19:40:35 multinode-237000 dockerd[1013]: time="2023-05-24T19:40:35.268146462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	57149051a83ba       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   47a29e76a2685
	39def610dc23f       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   0c36a90d64f44
	3bfd6b969c2bd       ead0a4a53df89                                                                                         3 minutes ago       Running             coredns                   1                   ad7642112aa94
	c30c1bc19435a       b0b1fa0f58c6e                                                                                         3 minutes ago       Running             kindnet-cni               1                   3eb1955f427fa
	777f8c6ebde34       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   47a29e76a2685
	aa3ebd9473a9c       b8aa50768fd67                                                                                         3 minutes ago       Running             kube-proxy                1                   48930abd6b8c8
	6c45eb5605de0       86b6af7dd652c                                                                                         3 minutes ago       Running             etcd                      0                   bbcd687984acc
	1fa5d98da31a2       89e70da428d29                                                                                         3 minutes ago       Running             kube-scheduler            1                   7611a20eab1aa
	9bce941b0c255       ac2b7465ebba9                                                                                         3 minutes ago       Running             kube-controller-manager   1                   6a7043059e27a
	49c5c747671da       c5b13e4f7806d                                                                                         3 minutes ago       Running             kube-apiserver            0                   97c5207fa79c0
	914b54caf4688       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   13 minutes ago      Exited              busybox                   0                   44b4e40976026
	0be0b91d64125       ead0a4a53df89                                                                                         15 minutes ago      Exited              coredns                   0                   7975ebab5fd50
	a5f82b77134ca       kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974              15 minutes ago      Exited              kindnet-cni               0                   0dcfd5ea3653b
	9e3f0057f97c2       b8aa50768fd67                                                                                         15 minutes ago      Exited              kube-proxy                0                   eca8b08a45760
	bde0fe1b24588       89e70da428d29                                                                                         16 minutes ago      Exited              kube-scheduler            0                   4d0c225625eb3
	c29b9004260c0       ac2b7465ebba9                                                                                         16 minutes ago      Exited              kube-controller-manager   0                   0c8db54a682ad
	
	* 
	* ==> coredns [0be0b91d6412] <==
	* [INFO] 10.244.1.2:54283 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000137801s
	[INFO] 10.244.1.2:52452 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000174501s
	[INFO] 10.244.1.2:43678 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000049501s
	[INFO] 10.244.1.2:40107 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000109301s
	[INFO] 10.244.1.2:38099 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000673s
	[INFO] 10.244.1.2:54570 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000056101s
	[INFO] 10.244.1.2:48943 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000059s
	[INFO] 10.244.0.3:51704 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000841s
	[INFO] 10.244.0.3:59397 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056801s
	[INFO] 10.244.0.3:36461 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000493s
	[INFO] 10.244.0.3:52950 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000052s
	[INFO] 10.244.1.2:56683 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155301s
	[INFO] 10.244.1.2:38488 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000898s
	[INFO] 10.244.1.2:56116 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000060901s
	[INFO] 10.244.1.2:42911 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000177601s
	[INFO] 10.244.0.3:39964 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001114s
	[INFO] 10.244.0.3:35026 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000286702s
	[INFO] 10.244.0.3:57544 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000221901s
	[INFO] 10.244.0.3:47379 - 5 "PTR IN 1.128.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000443903s
	[INFO] 10.244.1.2:41971 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125701s
	[INFO] 10.244.1.2:34141 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128201s
	[INFO] 10.244.1.2:52982 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000613s
	[INFO] 10.244.1.2:58165 - 5 "PTR IN 1.128.27.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000549s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [3bfd6b969c2b] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = e68c1f66d66a8b21178767f77ec9bbf4538be12549e49c63ad565269f31e317fbc64a6eb8980e12bd093747c3f544a0bc7c04266dffb836ae54229446b5ea471
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51468 - 12958 "HINFO IN 4911287048628147480.7847454982074672212. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.061818573s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-237000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-237000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=multinode-237000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T19_27_13_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 19:27:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-237000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 19:43:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 19:40:01 +0000   Wed, 24 May 2023 19:27:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 19:40:01 +0000   Wed, 24 May 2023 19:27:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 19:40:01 +0000   Wed, 24 May 2023 19:27:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 19:40:01 +0000   Wed, 24 May 2023 19:40:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.143.236
	  Hostname:    multinode-237000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 c82e8ca1d3b04f689b362fc79d934cab
	  System UUID:                a1fd074e-9d37-804e-9507-e627f053ff31
	  Boot ID:                    8cee6636-e963-4b18-aa98-d72964977b4f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-9t5bp                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5d78c9869d-qhx48                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-multinode-237000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m29s
	  kube-system                 kindnet-xgkpb                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-multinode-237000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-controller-manager-multinode-237000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-r6f94                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-multinode-237000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 3m26s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                    kubelet          Node multinode-237000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                    kubelet          Node multinode-237000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                    kubelet          Node multinode-237000 status is now: NodeHasSufficientPID
	  Normal  Starting                 16m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           15m                    node-controller  Node multinode-237000 event: Registered Node multinode-237000 in Controller
	  Normal  NodeReady                15m                    kubelet          Node multinode-237000 status is now: NodeReady
	  Normal  Starting                 3m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m38s (x8 over 3m38s)  kubelet          Node multinode-237000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m38s (x8 over 3m38s)  kubelet          Node multinode-237000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m38s (x7 over 3m38s)  kubelet          Node multinode-237000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m18s                  node-controller  Node multinode-237000 event: Registered Node multinode-237000 in Controller
	
	
	Name:               multinode-237000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-237000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 19:41:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-237000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 19:43:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 19:41:32 +0000   Wed, 24 May 2023 19:41:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 19:41:32 +0000   Wed, 24 May 2023 19:41:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 19:41:32 +0000   Wed, 24 May 2023 19:41:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 19:41:32 +0000   Wed, 24 May 2023 19:41:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.142.80
	  Hostname:    multinode-237000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 873113d9515c4491aa8995a73cbfbbba
	  System UUID:                d6e2dfd5-eaf1-6e40-9a4d-231923fae672
	  Boot ID:                    0d8a6ce9-fcb8-4040-acd5-ddd8e5c8dbd4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-s5cj7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kindnet-9g7mc              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-zglzj           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 13m                  kube-proxy       
	  Normal  Starting                 114s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    13m (x2 over 13m)    kubelet          Node multinode-237000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x2 over 13m)    kubelet          Node multinode-237000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x2 over 13m)    kubelet          Node multinode-237000-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 13m                  kubelet          Starting kubelet.
	  Normal  NodeReady                13m                  kubelet          Node multinode-237000-m02 status is now: NodeReady
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x2 over 117s)  kubelet          Node multinode-237000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x2 over 117s)  kubelet          Node multinode-237000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x2 over 117s)  kubelet          Node multinode-237000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  117s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           113s                 node-controller  Node multinode-237000-m02 event: Registered Node multinode-237000-m02 in Controller
	  Normal  NodeReady                107s                 kubelet          Node multinode-237000-m02 status is now: NodeReady
	
	
	Name:               multinode-237000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-237000-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 19:42:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-237000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 19:43:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 19:43:07 +0000   Wed, 24 May 2023 19:42:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 19:43:07 +0000   Wed, 24 May 2023 19:42:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 19:43:07 +0000   Wed, 24 May 2023 19:42:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 19:43:07 +0000   Wed, 24 May 2023 19:43:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.137.67
	  Hostname:    multinode-237000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 608a3e9b866b4ae4a018c52f7ee4866c
	  System UUID:                4882cb3b-5d4d-2b4d-922a-9ae863b687d6
	  Boot ID:                    62f6bbf6-8847-45f2-8eda-8c16dd9c33c3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fzbwb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-4qmlh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 10m                    kube-proxy  
	  Normal  Starting                 18s                    kube-proxy  
	  Normal  Starting                 6m11s                  kube-proxy  
	  Normal  Starting                 10m                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)      kubelet     Node multinode-237000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)      kubelet     Node multinode-237000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)      kubelet     Node multinode-237000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                10m                    kubelet     Node multinode-237000-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     6m14s (x2 over 6m14s)  kubelet     Node multinode-237000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    6m14s (x2 over 6m14s)  kubelet     Node multinode-237000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  6m14s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m14s (x2 over 6m14s)  kubelet     Node multinode-237000-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 6m14s                  kubelet     Starting kubelet.
	  Normal  NodeReady                6m6s                   kubelet     Node multinode-237000-m03 status is now: NodeReady
	  Normal  Starting                 35s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x2 over 35s)      kubelet     Node multinode-237000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x2 over 35s)      kubelet     Node multinode-237000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x2 over 35s)      kubelet     Node multinode-237000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  35s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                12s                    kubelet     Node multinode-237000-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	*               on the kernel command line
	[  +0.000076] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.145349] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.735284] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.318820] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[May24 19:39] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +17.650899] systemd-fstab-generator[640]: Ignoring "noauto" for root device
	[  +0.186518] systemd-fstab-generator[651]: Ignoring "noauto" for root device
	[ +11.476714] systemd-fstab-generator[935]: Ignoring "noauto" for root device
	[  +0.605604] systemd-fstab-generator[974]: Ignoring "noauto" for root device
	[  +0.175777] systemd-fstab-generator[985]: Ignoring "noauto" for root device
	[  +0.205129] systemd-fstab-generator[998]: Ignoring "noauto" for root device
	[  +1.493835] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.437194] systemd-fstab-generator[1163]: Ignoring "noauto" for root device
	[  +0.181823] systemd-fstab-generator[1174]: Ignoring "noauto" for root device
	[  +0.183995] systemd-fstab-generator[1185]: Ignoring "noauto" for root device
	[  +0.174938] systemd-fstab-generator[1196]: Ignoring "noauto" for root device
	[  +0.220933] systemd-fstab-generator[1210]: Ignoring "noauto" for root device
	[  +3.896409] systemd-fstab-generator[1435]: Ignoring "noauto" for root device
	[  +1.012087] kauditd_printk_skb: 29 callbacks suppressed
	[ +13.189459] hrtimer: interrupt took 2212915 ns
	[May24 19:40] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [6c45eb5605de] <==
	* {"level":"info","ts":"2023-05-24T19:39:45.470Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-24T19:39:45.470Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-24T19:39:45.470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26341a2374e1221d switched to configuration voters=(2752854011817304605)"}
	{"level":"info","ts":"2023-05-24T19:39:45.472Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"730cc898a550eb9b","local-member-id":"26341a2374e1221d","added-peer-id":"26341a2374e1221d","added-peer-peer-urls":["https://172.27.130.107:2380"]}
	{"level":"info","ts":"2023-05-24T19:39:45.475Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"730cc898a550eb9b","local-member-id":"26341a2374e1221d","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T19:39:45.476Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T19:39:45.498Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-24T19:39:45.504Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"26341a2374e1221d","initial-advertise-peer-urls":["https://172.27.143.236:2380"],"listen-peer-urls":["https://172.27.143.236:2380"],"advertise-client-urls":["https://172.27.143.236:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.143.236:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-24T19:39:45.504Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-24T19:39:45.505Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"172.27.143.236:2380"}
	{"level":"info","ts":"2023-05-24T19:39:45.510Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"172.27.143.236:2380"}
	{"level":"info","ts":"2023-05-24T19:39:46.882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26341a2374e1221d is starting a new election at term 2"}
	{"level":"info","ts":"2023-05-24T19:39:46.882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26341a2374e1221d became pre-candidate at term 2"}
	{"level":"info","ts":"2023-05-24T19:39:46.882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26341a2374e1221d received MsgPreVoteResp from 26341a2374e1221d at term 2"}
	{"level":"info","ts":"2023-05-24T19:39:46.882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26341a2374e1221d became candidate at term 3"}
	{"level":"info","ts":"2023-05-24T19:39:46.882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26341a2374e1221d received MsgVoteResp from 26341a2374e1221d at term 3"}
	{"level":"info","ts":"2023-05-24T19:39:46.882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"26341a2374e1221d became leader at term 3"}
	{"level":"info","ts":"2023-05-24T19:39:46.882Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 26341a2374e1221d elected leader 26341a2374e1221d at term 3"}
	{"level":"info","ts":"2023-05-24T19:39:46.887Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"26341a2374e1221d","local-member-attributes":"{Name:multinode-237000 ClientURLs:[https://172.27.143.236:2379]}","request-path":"/0/members/26341a2374e1221d/attributes","cluster-id":"730cc898a550eb9b","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-24T19:39:46.887Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T19:39:46.887Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T19:39:46.890Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T19:39:46.890Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"172.27.143.236:2379"}
	{"level":"info","ts":"2023-05-24T19:39:46.892Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T19:39:46.892Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:43:19 up 4 min,  0 users,  load average: 1.09, 1.04, 0.47
	Linux multinode-237000 5.10.57 #1 SMP Sat May 20 03:22:25 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [a5f82b77134c] <==
	* I0524 19:36:56.200577       1 main.go:223] Handling node with IPs: map[172.27.130.107:{}]
	I0524 19:36:56.200663       1 main.go:227] handling current node
	I0524 19:36:56.200678       1 main.go:223] Handling node with IPs: map[172.27.128.127:{}]
	I0524 19:36:56.200686       1 main.go:250] Node multinode-237000-m02 has CIDR [10.244.1.0/24] 
	I0524 19:36:56.201115       1 main.go:223] Handling node with IPs: map[172.27.132.18:{}]
	I0524 19:36:56.201205       1 main.go:250] Node multinode-237000-m03 has CIDR [10.244.2.0/24] 
	I0524 19:37:06.208370       1 main.go:223] Handling node with IPs: map[172.27.130.107:{}]
	I0524 19:37:06.208572       1 main.go:227] handling current node
	I0524 19:37:06.208588       1 main.go:223] Handling node with IPs: map[172.27.128.127:{}]
	I0524 19:37:06.208597       1 main.go:250] Node multinode-237000-m02 has CIDR [10.244.1.0/24] 
	I0524 19:37:06.209710       1 main.go:223] Handling node with IPs: map[172.27.134.200:{}]
	I0524 19:37:06.210013       1 main.go:250] Node multinode-237000-m03 has CIDR [10.244.3.0/24] 
	I0524 19:37:06.210163       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.27.134.200 Flags: [] Table: 0} 
	I0524 19:37:16.224766       1 main.go:223] Handling node with IPs: map[172.27.130.107:{}]
	I0524 19:37:16.224962       1 main.go:227] handling current node
	I0524 19:37:16.224989       1 main.go:223] Handling node with IPs: map[172.27.128.127:{}]
	I0524 19:37:16.225006       1 main.go:250] Node multinode-237000-m02 has CIDR [10.244.1.0/24] 
	I0524 19:37:16.225860       1 main.go:223] Handling node with IPs: map[172.27.134.200:{}]
	I0524 19:37:16.225971       1 main.go:250] Node multinode-237000-m03 has CIDR [10.244.3.0/24] 
	I0524 19:37:26.237575       1 main.go:223] Handling node with IPs: map[172.27.130.107:{}]
	I0524 19:37:26.237615       1 main.go:227] handling current node
	I0524 19:37:26.237628       1 main.go:223] Handling node with IPs: map[172.27.128.127:{}]
	I0524 19:37:26.237635       1 main.go:250] Node multinode-237000-m02 has CIDR [10.244.1.0/24] 
	I0524 19:37:26.237761       1 main.go:223] Handling node with IPs: map[172.27.134.200:{}]
	I0524 19:37:26.237769       1 main.go:250] Node multinode-237000-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kindnet [c30c1bc19435] <==
	* I0524 19:42:47.388661       1 main.go:223] Handling node with IPs: map[172.27.143.236:{}]
	I0524 19:42:47.388753       1 main.go:227] handling current node
	I0524 19:42:47.388769       1 main.go:223] Handling node with IPs: map[172.27.142.80:{}]
	I0524 19:42:47.388778       1 main.go:250] Node multinode-237000-m02 has CIDR [10.244.1.0/24] 
	I0524 19:42:47.388902       1 main.go:223] Handling node with IPs: map[172.27.137.67:{}]
	I0524 19:42:47.388914       1 main.go:250] Node multinode-237000-m03 has CIDR [10.244.2.0/24] 
	I0524 19:42:47.388968       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.27.137.67 Flags: [] Table: 0} 
	I0524 19:42:57.405861       1 main.go:223] Handling node with IPs: map[172.27.143.236:{}]
	I0524 19:42:57.406043       1 main.go:227] handling current node
	I0524 19:42:57.406077       1 main.go:223] Handling node with IPs: map[172.27.142.80:{}]
	I0524 19:42:57.406296       1 main.go:250] Node multinode-237000-m02 has CIDR [10.244.1.0/24] 
	I0524 19:42:57.406606       1 main.go:223] Handling node with IPs: map[172.27.137.67:{}]
	I0524 19:42:57.406643       1 main.go:250] Node multinode-237000-m03 has CIDR [10.244.2.0/24] 
	I0524 19:43:07.419783       1 main.go:223] Handling node with IPs: map[172.27.143.236:{}]
	I0524 19:43:07.419890       1 main.go:227] handling current node
	I0524 19:43:07.419906       1 main.go:223] Handling node with IPs: map[172.27.142.80:{}]
	I0524 19:43:07.419971       1 main.go:250] Node multinode-237000-m02 has CIDR [10.244.1.0/24] 
	I0524 19:43:07.420182       1 main.go:223] Handling node with IPs: map[172.27.137.67:{}]
	I0524 19:43:07.420199       1 main.go:250] Node multinode-237000-m03 has CIDR [10.244.2.0/24] 
	I0524 19:43:17.436501       1 main.go:223] Handling node with IPs: map[172.27.143.236:{}]
	I0524 19:43:17.436611       1 main.go:227] handling current node
	I0524 19:43:17.436628       1 main.go:223] Handling node with IPs: map[172.27.142.80:{}]
	I0524 19:43:17.436638       1 main.go:250] Node multinode-237000-m02 has CIDR [10.244.1.0/24] 
	I0524 19:43:17.436758       1 main.go:223] Handling node with IPs: map[172.27.137.67:{}]
	I0524 19:43:17.436772       1 main.go:250] Node multinode-237000-m03 has CIDR [10.244.2.0/24] 
	
	* 
	* ==> kube-apiserver [49c5c747671d] <==
	* I0524 19:39:48.934772       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0524 19:39:48.934921       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0524 19:39:48.938823       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0524 19:39:48.938860       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0524 19:39:49.064824       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0524 19:39:49.072924       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0524 19:39:49.075202       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0524 19:39:49.075578       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0524 19:39:49.075772       1 shared_informer.go:318] Caches are synced for configmaps
	I0524 19:39:49.091022       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0524 19:39:49.091125       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0524 19:39:49.096044       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0524 19:39:49.099205       1 cache.go:39] Caches are synced for autoregister controller
	I0524 19:39:49.139329       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0524 19:39:49.450214       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0524 19:39:49.890489       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0524 19:39:50.590434       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [172.27.130.107 172.27.143.236]
	I0524 19:39:50.593782       1 controller.go:624] quota admission added evaluator for: endpoints
	I0524 19:39:50.611912       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0524 19:39:53.155908       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0524 19:39:53.421142       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0524 19:39:53.443005       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0524 19:39:53.572898       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0524 19:39:53.589382       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0524 19:40:10.595795       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [172.27.143.236]
	
	* 
	* ==> kube-controller-manager [9bce941b0c25] <==
	* I0524 19:40:02.064325       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0524 19:40:02.064298       1 shared_informer.go:318] Caches are synced for garbage collector
	W0524 19:40:41.610878       1 topologycache.go:232] Can't get CPU or zone information for multinode-237000-m02 node
	I0524 19:40:41.611927       1 event.go:307] "Event occurred" object="multinode-237000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-237000-m03 status is now: NodeNotReady"
	I0524 19:40:41.626418       1 event.go:307] "Event occurred" object="multinode-237000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-237000-m02 status is now: NodeNotReady"
	I0524 19:40:41.645979       1 event.go:307] "Event occurred" object="kube-system/kindnet-fzbwb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0524 19:40:41.653779       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-tdzj2" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0524 19:40:41.678041       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-4qmlh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0524 19:40:41.682611       1 event.go:307] "Event occurred" object="kube-system/kindnet-9g7mc" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0524 19:40:41.713150       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-zglzj" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0524 19:41:17.712963       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-s5cj7"
	E0524 19:41:21.531700       1 gc_controller.go:156] failed to get node multinode-237000-m02 : node "multinode-237000-m02" not found
	I0524 19:41:21.724021       1 event.go:307] "Event occurred" object="multinode-237000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-237000-m02 event: Removing Node multinode-237000-m02 from Controller"
	I0524 19:41:22.577559       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-tdzj2" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-tdzj2"
	I0524 19:41:22.577626       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-237000-m02\" does not exist"
	I0524 19:41:22.594651       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-237000-m02" podCIDRs=[10.244.1.0/24]
	I0524 19:41:26.725448       1 event.go:307] "Event occurred" object="multinode-237000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-237000-m02 event: Registered Node multinode-237000-m02 in Controller"
	W0524 19:41:32.420739       1 topologycache.go:232] Can't get CPU or zone information for multinode-237000-m02 node
	I0524 19:41:36.756799       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-tdzj2" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-tdzj2"
	I0524 19:41:36.757789       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-s5cj7" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-s5cj7"
	W0524 19:42:42.397366       1 topologycache.go:232] Can't get CPU or zone information for multinode-237000-m02 node
	W0524 19:42:44.277681       1 topologycache.go:232] Can't get CPU or zone information for multinode-237000-m02 node
	I0524 19:42:44.277766       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-237000-m03\" does not exist"
	I0524 19:42:44.315278       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-237000-m03" podCIDRs=[10.244.2.0/24]
	W0524 19:43:07.019689       1 topologycache.go:232] Can't get CPU or zone information for multinode-237000-m02 node
	
	* 
	* ==> kube-controller-manager [c29b9004260c] <==
	* I0524 19:29:23.803345       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-237000-m02"
	I0524 19:29:23.803692       1 event.go:307] "Event occurred" object="multinode-237000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-237000-m02 event: Registered Node multinode-237000-m02 in Controller"
	W0524 19:29:36.888965       1 topologycache.go:232] Can't get CPU or zone information for multinode-237000-m02 node
	I0524 19:29:49.544209       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0524 19:29:49.575234       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-tdzj2"
	I0524 19:29:49.598757       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-9t5bp"
	W0524 19:32:20.229647       1 topologycache.go:232] Can't get CPU or zone information for multinode-237000-m02 node
	I0524 19:32:20.231394       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-237000-m03\" does not exist"
	I0524 19:32:20.281716       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-237000-m03" podCIDRs=[10.244.2.0/24]
	I0524 19:32:20.284900       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fzbwb"
	I0524 19:32:20.288033       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-4qmlh"
	I0524 19:32:23.865842       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-237000-m03"
	I0524 19:32:23.865926       1 event.go:307] "Event occurred" object="multinode-237000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-237000-m03 event: Registered Node multinode-237000-m03 in Controller"
	W0524 19:32:35.203047       1 topologycache.go:232] Can't get CPU or zone information for multinode-237000-m02 node
	W0524 19:36:08.968622       1 topologycache.go:232] Can't get CPU or zone information for multinode-237000-m02 node
	I0524 19:36:08.975754       1 event.go:307] "Event occurred" object="multinode-237000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-237000-m03 status is now: NodeNotReady"
	I0524 19:36:09.002841       1 event.go:307] "Event occurred" object="kube-system/kindnet-fzbwb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0524 19:36:09.020087       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-4qmlh" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	W0524 19:37:03.802915       1 topologycache.go:232] Can't get CPU or zone information for multinode-237000-m02 node
	I0524 19:37:04.030365       1 event.go:307] "Event occurred" object="multinode-237000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-237000-m03 event: Removing Node multinode-237000-m03 from Controller"
	W0524 19:37:05.212729       1 topologycache.go:232] Can't get CPU or zone information for multinode-237000-m02 node
	I0524 19:37:05.214806       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-237000-m03\" does not exist"
	I0524 19:37:05.234760       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-237000-m03" podCIDRs=[10.244.3.0/24]
	I0524 19:37:09.031558       1 event.go:307] "Event occurred" object="multinode-237000-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-237000-m03 event: Registered Node multinode-237000-m03 in Controller"
	W0524 19:37:13.610506       1 topologycache.go:232] Can't get CPU or zone information for multinode-237000-m02 node
	
	* 
	* ==> kube-proxy [9e3f0057f97c] <==
	* I0524 19:27:26.148168       1 node.go:141] Successfully retrieved node IP: 172.27.130.107
	I0524 19:27:26.148365       1 server_others.go:110] "Detected node IP" address="172.27.130.107"
	I0524 19:27:26.149015       1 server_others.go:551] "Using iptables proxy"
	I0524 19:27:26.268276       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0524 19:27:26.268378       1 server_others.go:190] "Using iptables Proxier"
	I0524 19:27:26.268460       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0524 19:27:26.269189       1 server.go:657] "Version info" version="v1.27.2"
	I0524 19:27:26.269337       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 19:27:26.270965       1 config.go:188] "Starting service config controller"
	I0524 19:27:26.271009       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0524 19:27:26.271033       1 config.go:97] "Starting endpoint slice config controller"
	I0524 19:27:26.271041       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0524 19:27:26.273992       1 config.go:315] "Starting node config controller"
	I0524 19:27:26.274190       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0524 19:27:26.372182       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0524 19:27:26.372199       1 shared_informer.go:318] Caches are synced for service config
	I0524 19:27:26.374992       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [aa3ebd9473a9] <==
	* I0524 19:39:53.008037       1 node.go:141] Successfully retrieved node IP: 172.27.143.236
	I0524 19:39:53.008226       1 server_others.go:110] "Detected node IP" address="172.27.143.236"
	I0524 19:39:53.008366       1 server_others.go:551] "Using iptables proxy"
	I0524 19:39:53.163849       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0524 19:39:53.163884       1 server_others.go:190] "Using iptables Proxier"
	I0524 19:39:53.166825       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0524 19:39:53.177318       1 server.go:657] "Version info" version="v1.27.2"
	I0524 19:39:53.177660       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 19:39:53.181975       1 config.go:97] "Starting endpoint slice config controller"
	I0524 19:39:53.184130       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0524 19:39:53.184381       1 config.go:188] "Starting service config controller"
	I0524 19:39:53.184527       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0524 19:39:53.188071       1 config.go:315] "Starting node config controller"
	I0524 19:39:53.188146       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0524 19:39:53.285514       1 shared_informer.go:318] Caches are synced for service config
	I0524 19:39:53.285741       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0524 19:39:53.290737       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [1fa5d98da31a] <==
	* I0524 19:39:46.138500       1 serving.go:348] Generated self-signed cert in-memory
	W0524 19:39:48.980947       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0524 19:39:48.981194       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0524 19:39:48.981472       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0524 19:39:48.981619       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0524 19:39:49.051407       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0524 19:39:49.051514       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 19:39:49.060219       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0524 19:39:49.061388       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0524 19:39:49.061411       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0524 19:39:49.073784       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0524 19:39:49.174345       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [bde0fe1b2458] <==
	* W0524 19:27:09.083618       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0524 19:27:09.083659       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0524 19:27:09.104598       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 19:27:09.104788       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 19:27:09.185324       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0524 19:27:09.185376       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0524 19:27:09.196851       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0524 19:27:09.196878       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0524 19:27:09.237811       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0524 19:27:09.238154       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0524 19:27:09.315037       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0524 19:27:09.315578       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0524 19:27:09.336810       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0524 19:27:09.336941       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0524 19:27:09.408709       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0524 19:27:09.408756       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0524 19:27:09.459521       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 19:27:09.459635       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 19:27:09.569006       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0524 19:27:09.569231       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0524 19:27:10.543594       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0524 19:37:36.464781       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0524 19:37:36.464977       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0524 19:37:36.473008       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0524 19:37:36.473071       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 19:38:53 UTC, ends at Wed 2023-05-24 19:43:20 UTC. --
	May 24 19:39:59 multinode-237000 kubelet[1441]: E0524 19:39:59.038002    1441 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-qhx48" podUID=12d04c63-9898-4ccf-9e6d-92d8f3d086a4
	May 24 19:39:59 multinode-237000 kubelet[1441]: E0524 19:39:59.038399    1441 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-67b7f59bb-9t5bp" podUID=57289db9-2a89-4cb2-b073-88d539b07054
	May 24 19:40:01 multinode-237000 kubelet[1441]: E0524 19:40:01.038697    1441 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-67b7f59bb-9t5bp" podUID=57289db9-2a89-4cb2-b073-88d539b07054
	May 24 19:40:01 multinode-237000 kubelet[1441]: E0524 19:40:01.039908    1441 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-qhx48" podUID=12d04c63-9898-4ccf-9e6d-92d8f3d086a4
	May 24 19:40:01 multinode-237000 kubelet[1441]: I0524 19:40:01.484482    1441 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	May 24 19:40:07 multinode-237000 kubelet[1441]: I0524 19:40:07.192506    1441 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c36a90d64f44cbe3e7b813b0552fe87edc4244f7d08f14a5fe638740b5385cd"
	May 24 19:40:07 multinode-237000 kubelet[1441]: I0524 19:40:07.206928    1441 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad7642112aa94466071453e6f567b007ce3c20bbaa66a5bec8a675b507ebbf5a"
	May 24 19:40:23 multinode-237000 kubelet[1441]: I0524 19:40:23.554533    1441 scope.go:115] "RemoveContainer" containerID="8b4ccab3df53d49062bba7fa20be79830f07303d59bcdf048614dc7a912388fa"
	May 24 19:40:23 multinode-237000 kubelet[1441]: I0524 19:40:23.554916    1441 scope.go:115] "RemoveContainer" containerID="777f8c6ebde34191b4cc66bb27d99f75fc5dc837353e1b61c2d7810a04d2a1f6"
	May 24 19:40:23 multinode-237000 kubelet[1441]: E0524 19:40:23.555206    1441 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(6498131a-f2e2-4098-9a5f-6c277fae3747)\"" pod="kube-system/storage-provisioner" podUID=6498131a-f2e2-4098-9a5f-6c277fae3747
	May 24 19:40:35 multinode-237000 kubelet[1441]: I0524 19:40:35.040077    1441 scope.go:115] "RemoveContainer" containerID="777f8c6ebde34191b4cc66bb27d99f75fc5dc837353e1b61c2d7810a04d2a1f6"
	May 24 19:40:41 multinode-237000 kubelet[1441]: I0524 19:40:41.043838    1441 scope.go:115] "RemoveContainer" containerID="7589cfe30be6d3cb099a085c1f179fcd49fd25641b84d6e0195f651b0c18fad8"
	May 24 19:40:41 multinode-237000 kubelet[1441]: E0524 19:40:41.075599    1441 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:40:41 multinode-237000 kubelet[1441]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:40:41 multinode-237000 kubelet[1441]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:40:41 multinode-237000 kubelet[1441]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:40:41 multinode-237000 kubelet[1441]: I0524 19:40:41.100643    1441 scope.go:115] "RemoveContainer" containerID="30b43ae6055b8e52934aa736ff06f16afb2a355cc7363194ecbc4d3d7c73baff"
	May 24 19:41:41 multinode-237000 kubelet[1441]: E0524 19:41:41.076370    1441 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:41:41 multinode-237000 kubelet[1441]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:41:41 multinode-237000 kubelet[1441]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:41:41 multinode-237000 kubelet[1441]:  > table=nat chain=KUBE-KUBELET-CANARY
	May 24 19:42:41 multinode-237000 kubelet[1441]: E0524 19:42:41.075297    1441 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 19:42:41 multinode-237000 kubelet[1441]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 19:42:41 multinode-237000 kubelet[1441]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 19:42:41 multinode-237000 kubelet[1441]:  > table=nat chain=KUBE-KUBELET-CANARY
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-237000 -n multinode-237000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-237000 -n multinode-237000: (5.1130821s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-237000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (358.21s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (383.47s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.3651177178.exe start -p running-upgrade-893100 --memory=2200 --vm-driver=hyperv
E0524 20:02:16.691171    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:132: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.3651177178.exe start -p running-upgrade-893100 --memory=2200 --vm-driver=hyperv: (3m13.2329635s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-893100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0524 20:05:08.988098    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p running-upgrade-893100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (2m13.5600942s)

                                                
                                                
-- stdout --
	* [running-upgrade-893100] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the hyperv driver based on existing profile
	* Starting control plane node running-upgrade-893100 in cluster running-upgrade-893100
	* Updating the running hyperv "running-upgrade-893100" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 20:04:40.108028   10868 out.go:296] Setting OutFile to fd 1592 ...
	I0524 20:04:40.187964   10868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 20:04:40.187964   10868 out.go:309] Setting ErrFile to fd 1604...
	I0524 20:04:40.187964   10868 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 20:04:40.212855   10868 out.go:303] Setting JSON to false
	I0524 20:04:40.217846   10868 start.go:125] hostinfo: {"hostname":"minikube1","uptime":7193,"bootTime":1684951486,"procs":164,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2965 Build 19045.2965","kernelVersion":"10.0.19045.2965 Build 19045.2965","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0524 20:04:40.217937   10868 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 20:04:40.222498   10868 out.go:177] * [running-upgrade-893100] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	I0524 20:04:40.225385   10868 notify.go:220] Checking for updates...
	I0524 20:04:40.227787   10868 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 20:04:40.233081   10868 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 20:04:40.235538   10868 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0524 20:04:40.242754   10868 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 20:04:40.246476   10868 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 20:04:40.251034   10868 config.go:182] Loaded profile config "running-upgrade-893100": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0524 20:04:40.251034   10868 start_flags.go:683] config upgrade: Driver=hyperv
	I0524 20:04:40.251034   10868 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de
	I0524 20:04:40.251034   10868 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\running-upgrade-893100\config.json ...
	I0524 20:04:40.256793   10868 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0524 20:04:40.258769   10868 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 20:04:42.224514   10868 out.go:177] * Using the hyperv driver based on existing profile
	I0524 20:04:42.227504   10868 start.go:295] selected driver: hyperv
	I0524 20:04:42.227504   10868 start.go:870] validating driver "hyperv" against &{Name:running-upgrade-893100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.27.134.82 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:04:42.228547   10868 start.go:881] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 20:04:42.282303   10868 cni.go:84] Creating CNI manager for ""
	I0524 20:04:42.282303   10868 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0524 20:04:42.282303   10868 start_flags.go:319] config:
	{Name:running-upgrade-893100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.27.134.82 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:04:42.283132   10868 iso.go:125] acquiring lock: {Name:mk3b29db369ab0f922ac5eeb788beee87e18ec94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:04:42.286909   10868 out.go:177] * Starting control plane node running-upgrade-893100 in cluster running-upgrade-893100
	I0524 20:04:42.288904   10868 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W0524 20:04:42.327662   10868 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0524 20:04:42.328891   10868 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I0524 20:04:42.328891   10868 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0524 20:04:42.328891   10868 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I0524 20:04:42.328891   10868 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I0524 20:04:42.328891   10868 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I0524 20:04:42.328891   10868 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0524 20:04:42.328891   10868 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\running-upgrade-893100\config.json ...
	I0524 20:04:42.328891   10868 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0524 20:04:42.329030   10868 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I0524 20:04:42.333534   10868 cache.go:195] Successfully downloaded all kic artifacts
	I0524 20:04:42.334529   10868 start.go:364] acquiring machines lock for running-upgrade-893100: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 20:04:42.521171   10868 cache.go:107] acquiring lock: {Name:mk4e8ee16ba5b475b341c78282e92381b8584a70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:04:42.522169   10868 cache.go:107] acquiring lock: {Name:mkf253ced278c18e0b579f9f5e07f6a2fe7db678 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:04:42.522169   10868 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0524 20:04:42.522169   10868 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0524 20:04:42.523173   10868 cache.go:107] acquiring lock: {Name:mk67b634fe9a890edc5195da54a2f3093e0c8f30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:04:42.523173   10868 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0524 20:04:42.523173   10868 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 193.876ms
	I0524 20:04:42.523173   10868 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0524 20:04:42.531171   10868 cache.go:107] acquiring lock: {Name:mkbbc88bc55edd0ef8bd1c53673fe74e0129caa1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:04:42.531171   10868 cache.go:107] acquiring lock: {Name:mka7be082bbc64a256cc388eda31b6c9edba386f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:04:42.531171   10868 cache.go:107] acquiring lock: {Name:mk69342e4f48cfcf5669830048d73215a892bfa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:04:42.532267   10868 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0524 20:04:42.532267   10868 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0524 20:04:42.532267   10868 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0524 20:04:42.532267   10868 cache.go:107] acquiring lock: {Name:mkcd99a49ef11cbbf53d95904dadb7eadb7e30f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:04:42.532267   10868 cache.go:107] acquiring lock: {Name:mk7a50c4bf2c20bec1fff9de3ac74780139c1c4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:04:42.534205   10868 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0524 20:04:42.534205   10868 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0524 20:04:42.560193   10868 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0524 20:04:42.563198   10868 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0524 20:04:42.567189   10868 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0524 20:04:42.576200   10868 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0524 20:04:42.577197   10868 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0524 20:04:42.578214   10868 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0524 20:04:42.580200   10868 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	W0524 20:04:42.704163   10868 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0524 20:04:42.812786   10868 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0524 20:04:42.930666   10868 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0524 20:04:43.063080   10868 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0524 20:04:43.182296   10868 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0524 20:04:43.236299   10868 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I0524 20:04:43.244314   10868 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I0524 20:04:43.281859   10868 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	W0524 20:04:43.298271   10868 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0524 20:04:43.357867   10868 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I0524 20:04:43.415232   10868 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	W0524 20:04:43.416553   10868 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0524 20:04:43.557592   10868 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 exists
	I0524 20:04:43.557592   10868 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.1" took 1.2284263s
	I0524 20:04:43.557592   10868 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 succeeded
	I0524 20:04:43.566583   10868 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0524 20:04:43.630543   10868 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I0524 20:04:44.331068   10868 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 exists
	I0524 20:04:44.331671   10868 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.17.0" took 2.002781s
	I0524 20:04:44.331671   10868 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 succeeded
	I0524 20:04:44.533924   10868 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 exists
	I0524 20:04:44.534921   10868 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.17.0" took 2.2058926s
	I0524 20:04:44.534921   10868 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 succeeded
	I0524 20:04:44.620931   10868 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 exists
	I0524 20:04:44.620931   10868 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.17.0" took 2.2914462s
	I0524 20:04:44.620931   10868 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 succeeded
	I0524 20:04:44.644924   10868 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 exists
	I0524 20:04:44.644924   10868 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns_1.6.5" took 2.3158957s
	I0524 20:04:44.644924   10868 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 succeeded
	I0524 20:04:45.036550   10868 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 exists
	I0524 20:04:45.036550   10868 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.17.0" took 2.7073851s
	I0524 20:04:45.036550   10868 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 succeeded
	I0524 20:04:45.880739   10868 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 exists
	I0524 20:04:45.880739   10868 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.4.3-0" took 3.5518505s
	I0524 20:04:45.881742   10868 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 succeeded
	I0524 20:04:45.881742   10868 cache.go:87] Successfully saved all images to host disk.
	I0524 20:05:52.741024   10868 start.go:368] acquired machines lock for "running-upgrade-893100" in 1m10.4065333s
	I0524 20:05:52.741024   10868 start.go:96] Skipping create...Using existing machine configuration
	I0524 20:05:52.741024   10868 fix.go:55] fixHost starting: minikube
	I0524 20:05:52.741797   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:05:53.588971   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:53.589034   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:53.589110   10868 fix.go:103] recreateIfNeeded on running-upgrade-893100: state=Running err=<nil>
	W0524 20:05:53.589110   10868 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 20:05:53.943072   10868 out.go:177] * Updating the running hyperv "running-upgrade-893100" VM ...
	I0524 20:05:54.382306   10868 machine.go:88] provisioning docker machine ...
	I0524 20:05:54.383451   10868 buildroot.go:166] provisioning hostname "running-upgrade-893100"
	I0524 20:05:54.383559   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:05:55.250505   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:55.412731   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:55.412731   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:56.711584   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:05:56.711584   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:56.715753   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:05:56.716915   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:05:56.716915   10868 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-893100 && echo "running-upgrade-893100" | sudo tee /etc/hostname
	I0524 20:05:56.900973   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-893100
	
	I0524 20:05:56.901042   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:05:57.759287   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:57.759287   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:57.759363   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:58.953124   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:05:58.953124   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:58.958259   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:05:58.959259   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:05:58.959328   10868 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-893100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-893100/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-893100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 20:05:59.136975   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 20:05:59.137050   10868 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0524 20:05:59.137050   10868 buildroot.go:174] setting up certificates
	I0524 20:05:59.137050   10868 provision.go:83] configureAuth start
	I0524 20:05:59.137050   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:00.023500   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:00.023500   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:00.023500   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:01.368780   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:01.368780   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:01.368780   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:02.233377   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:02.233557   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:02.233557   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:03.452998   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:03.452998   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:03.452998   10868 provision.go:138] copyHostCerts
	I0524 20:06:03.452998   10868 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0524 20:06:03.453986   10868 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0524 20:06:03.453986   10868 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0524 20:06:03.455998   10868 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0524 20:06:03.455998   10868 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0524 20:06:03.455998   10868 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0524 20:06:03.456992   10868 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0524 20:06:03.456992   10868 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0524 20:06:03.457996   10868 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0524 20:06:03.459004   10868 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-893100 san=[172.27.134.82 172.27.134.82 localhost 127.0.0.1 minikube running-upgrade-893100]
	I0524 20:06:03.728326   10868 provision.go:172] copyRemoteCerts
	I0524 20:06:03.737398   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 20:06:03.737398   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:04.586035   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:04.586035   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:04.586128   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:05.796410   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:05.796598   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:05.797003   10868 sshutil.go:53] new ssh client: &{IP:172.27.134.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa Username:docker}
	I0524 20:06:05.917620   10868 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.1801s)
	I0524 20:06:05.918104   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0524 20:06:05.946671   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0524 20:06:05.972632   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0524 20:06:05.998139   10868 provision.go:86] duration metric: configureAuth took 6.861093s
	I0524 20:06:05.998139   10868 buildroot.go:189] setting minikube options for container-runtime
	I0524 20:06:05.998139   10868 config.go:182] Loaded profile config "running-upgrade-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0524 20:06:05.998139   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:06.835417   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:06.835417   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:06.835417   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:08.003023   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:08.003096   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:08.008201   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:08.008892   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:08.009428   10868 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 20:06:08.198102   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 20:06:08.198102   10868 buildroot.go:70] root file system type: tmpfs
	I0524 20:06:08.198398   10868 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 20:06:08.198501   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:09.022074   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:09.022074   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:09.022074   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:10.225770   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:10.225770   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:10.229774   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:10.229774   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:10.229774   10868 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 20:06:10.399947   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 20:06:10.399947   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:11.230707   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:11.230707   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:11.230707   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:12.467458   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:12.467458   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:12.471467   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:12.472454   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:12.472454   10868 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 20:06:26.560253   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
	+++ /lib/systemd/system/docker.service.new
	@@ -3,9 +3,12 @@
	 Documentation=https://docs.docker.com
	 After=network.target  minikube-automount.service docker.socket
	 Requires= minikube-automount.service docker.socket 
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -21,7 +24,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 20:06:26.560426   10868 machine.go:91] provisioned docker machine in 32.1769885s
	I0524 20:06:26.560426   10868 start.go:300] post-start starting for "running-upgrade-893100" (driver="hyperv")
	I0524 20:06:26.560426   10868 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 20:06:26.575492   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 20:06:26.575492   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:27.435244   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:27.435244   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:27.435244   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:28.838356   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:28.838356   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:28.838933   10868 sshutil.go:53] new ssh client: &{IP:172.27.134.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa Username:docker}
	I0524 20:06:28.959761   10868 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (2.3841871s)
	I0524 20:06:28.972843   10868 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 20:06:28.980533   10868 info.go:137] Remote host: Buildroot 2019.02.7
	I0524 20:06:28.980613   10868 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0524 20:06:28.980976   10868 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0524 20:06:28.982117   10868 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> 65602.pem in /etc/ssl/certs
	I0524 20:06:28.995954   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0524 20:06:29.013022   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /etc/ssl/certs/65602.pem (1708 bytes)
	I0524 20:06:29.050228   10868 start.go:303] post-start completed in 2.4898051s
	I0524 20:06:29.050228   10868 fix.go:57] fixHost completed within 36.3092251s
	I0524 20:06:29.050228   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:29.975618   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:29.975688   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:29.975688   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:31.397968   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:31.398200   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:31.402578   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:31.403192   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:31.403192   10868 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0524 20:06:31.691038   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684958791.670023101
	
	I0524 20:06:31.691103   10868 fix.go:207] guest clock: 1684958791.670023101
	I0524 20:06:31.691103   10868 fix.go:220] Guest: 2023-05-24 20:06:31.670023101 +0000 UTC Remote: 2023-05-24 20:06:29.0502283 +0000 UTC m=+109.032679301 (delta=2.619794801s)
	I0524 20:06:31.691177   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:32.544937   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:32.545011   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:32.545381   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:33.789345   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:33.789403   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:33.795876   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:33.796952   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:33.797016   10868 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1684958791
	I0524 20:06:33.971448   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May 24 20:06:31 UTC 2023
	
	I0524 20:06:33.971448   10868 fix.go:227] clock set: Wed May 24 20:06:31 UTC 2023
	 (err=<nil>)
	I0524 20:06:33.971448   10868 start.go:83] releasing machines lock for "running-upgrade-893100", held for 41.2304529s
	I0524 20:06:33.972428   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:34.889266   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:34.889266   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:34.889266   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:36.338371   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:36.338619   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:36.342252   10868 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 20:06:36.343221   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:36.353265   10868 ssh_runner.go:195] Run: cat /version.json
	I0524 20:06:36.353265   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:37.368370   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:37.368370   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:37.368370   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:37.406338   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:37.406338   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:37.406338   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:38.901480   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:38.901480   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:38.902494   10868 sshutil.go:53] new ssh client: &{IP:172.27.134.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa Username:docker}
	I0524 20:06:38.996335   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:38.996335   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:38.996335   10868 sshutil.go:53] new ssh client: &{IP:172.27.134.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa Username:docker}
	I0524 20:06:39.034420   10868 ssh_runner.go:235] Completed: cat /version.json: (2.6811586s)
	W0524 20:06:39.034420   10868 start.go:409] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0524 20:06:39.048577   10868 ssh_runner.go:195] Run: systemctl --version
	I0524 20:06:39.087442   10868 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 20:06:39.172435   10868 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 20:06:39.172435   10868 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.8301879s)
	I0524 20:06:39.190434   10868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0524 20:06:39.221212   10868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0524 20:06:39.230789   10868 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0524 20:06:39.230789   10868 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0524 20:06:39.230789   10868 start.go:481] detecting cgroup driver to use...
	I0524 20:06:39.231791   10868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:06:39.256768   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0524 20:06:39.278742   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 20:06:39.290371   10868 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 20:06:39.306364   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 20:06:39.341273   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:06:39.366461   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 20:06:39.395843   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:06:39.431834   10868 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 20:06:39.469460   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 20:06:39.528350   10868 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 20:06:39.559807   10868 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 20:06:39.583690   10868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:39.922278   10868 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 20:06:39.960021   10868 start.go:481] detecting cgroup driver to use...
	I0524 20:06:39.978881   10868 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 20:06:40.010550   10868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:06:40.036238   10868 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 20:06:40.110050   10868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:06:40.139694   10868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 20:06:40.157646   10868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:06:40.187302   10868 ssh_runner.go:195] Run: which cri-dockerd
	I0524 20:06:40.203647   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 20:06:40.217401   10868 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 20:06:40.246006   10868 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 20:06:40.528783   10868 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 20:06:40.824730   10868 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 20:06:40.824730   10868 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 20:06:40.859463   10868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:41.276580   10868 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 20:06:53.489976   10868 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.2134033s)
	I0524 20:06:53.492790   10868 out.go:177] 
	W0524 20:06:53.495379   10868 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0524 20:06:53.495379   10868 out.go:239] * 
	* 
	W0524 20:06:53.497289   10868 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 20:06:53.499747   10868 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p running-upgrade-893100 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-05-24 20:06:53.5811427 +0000 UTC m=+5235.118932101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-893100 -n running-upgrade-893100
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-893100 -n running-upgrade-893100: exit status 6 (5.9568968s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0524 20:06:59.478064    5456 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-893100" does not appear in C:\Users\jenkins.minikube1\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-893100" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-893100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-893100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-893100: (49.7074819s)
--- FAIL: TestRunningBinaryUpgrade (383.47s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (317.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-893100 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-893100 --driver=hyperv: exit status 1 (4m59.7397442s)

                                                
                                                
-- stdout --
	* [NoKubernetes-893100] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting control plane node NoKubernetes-893100 in cluster NoKubernetes-893100
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...

                                                
                                                
-- /stdout --
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-893100 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-893100 -n NoKubernetes-893100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-893100 -n NoKubernetes-893100: (5.6870212s)
helpers_test.go:244: <<< TestNoKubernetes/serial/StartWithK8s FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestNoKubernetes/serial/StartWithK8s]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-893100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-893100 logs -n 25: (5.1133196s)
helpers_test.go:252: TestNoKubernetes/serial/StartWithK8s logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| node    | list -p multinode-237000       | multinode-237000          | minikube1\jenkins | v1.30.1 | 24 May 23 19:48 UTC |                     |
	| start   | -p multinode-237000-m02        | multinode-237000-m02      | minikube1\jenkins | v1.30.1 | 24 May 23 19:48 UTC |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p multinode-237000-m03        | multinode-237000-m03      | minikube1\jenkins | v1.30.1 | 24 May 23 19:48 UTC | 24 May 23 19:50 UTC |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| node    | add -p multinode-237000        | multinode-237000          | minikube1\jenkins | v1.30.1 | 24 May 23 19:50 UTC |                     |
	| delete  | -p multinode-237000-m03        | multinode-237000-m03      | minikube1\jenkins | v1.30.1 | 24 May 23 19:50 UTC | 24 May 23 19:50 UTC |
	| delete  | -p multinode-237000            | multinode-237000          | minikube1\jenkins | v1.30.1 | 24 May 23 19:51 UTC | 24 May 23 19:51 UTC |
	| start   | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:51 UTC | 24 May 23 19:55 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr              |                           |                   |         |                     |                     |
	|         | --wait=true --preload=false    |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                           |                   |         |                     |                     |
	| ssh     | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:55 UTC | 24 May 23 19:55 UTC |
	|         | -- docker pull                 |                           |                   |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox    |                           |                   |         |                     |                     |
	| stop    | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:55 UTC | 24 May 23 19:55 UTC |
	| start   | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:55 UTC | 24 May 23 19:57 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --wait=true --driver=hyperv    |                           |                   |         |                     |                     |
	| ssh     | -p test-preload-134100 --      | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:57 UTC | 24 May 23 19:57 UTC |
	|         | docker images                  |                           |                   |         |                     |                     |
	| delete  | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:57 UTC | 24 May 23 19:57 UTC |
	| start   | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 19:57 UTC | 24 May 23 19:59 UTC |
	|         | --memory=2048 --driver=hyperv  |                           |                   |         |                     |                     |
	| stop    | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 19:59 UTC | 24 May 23 19:59 UTC |
	|         | --schedule 5m                  |                           |                   |         |                     |                     |
	| ssh     | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 19:59 UTC | 24 May 23 19:59 UTC |
	|         | -- sudo systemctl show         |                           |                   |         |                     |                     |
	|         | minikube-scheduled-stop        |                           |                   |         |                     |                     |
	|         | --no-page                      |                           |                   |         |                     |                     |
	| stop    | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 19:59 UTC | 24 May 23 20:00 UTC |
	|         | --schedule 5s                  |                           |                   |         |                     |                     |
	| delete  | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC | 24 May 23 20:01 UTC |
	| start   | -p offline-docker-893100       | offline-docker-893100     | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC | 24 May 23 20:05 UTC |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --memory=2048 --wait=true      |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p pause-893100 --memory=2048  | pause-893100              | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC | 24 May 23 20:03 UTC |
	|         | --install-addons=false         |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv     |                           |                   |         |                     |                     |
	| start   | -p NoKubernetes-893100         | NoKubernetes-893100       | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC |                     |
	|         | --no-kubernetes                |                           |                   |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p NoKubernetes-893100         | NoKubernetes-893100       | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p pause-893100                | pause-893100              | minikube1\jenkins | v1.30.1 | 24 May 23 20:03 UTC |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-893100      | running-upgrade-893100    | minikube1\jenkins | v1.30.1 | 24 May 23 20:04 UTC |                     |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p offline-docker-893100       | offline-docker-893100     | minikube1\jenkins | v1.30.1 | 24 May 23 20:05 UTC | 24 May 23 20:06 UTC |
	| start   | -p force-systemd-flag-052200   | force-systemd-flag-052200 | minikube1\jenkins | v1.30.1 | 24 May 23 20:06 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 20:06:00
	Running on machine: minikube1
	Binary: Built with gc go1.20.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 20:06:00.194871    7000 out.go:296] Setting OutFile to fd 1632 ...
	I0524 20:06:00.277864    7000 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 20:06:00.277864    7000 out.go:309] Setting ErrFile to fd 1636...
	I0524 20:06:00.277864    7000 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 20:06:00.301877    7000 out.go:303] Setting JSON to false
	I0524 20:06:00.305881    7000 start.go:125] hostinfo: {"hostname":"minikube1","uptime":7273,"bootTime":1684951486,"procs":160,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2965 Build 19045.2965","kernelVersion":"10.0.19045.2965 Build 19045.2965","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0524 20:06:00.305881    7000 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 20:06:00.310903    7000 out.go:177] * [force-systemd-flag-052200] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	I0524 20:06:00.313898    7000 notify.go:220] Checking for updates...
	I0524 20:06:00.315886    7000 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 20:06:00.318874    7000 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 20:06:00.322896    7000 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0524 20:06:00.328610    7000 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 20:06:00.332277    7000 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 20:05:59.168692    4556 ssh_runner.go:235] Completed: sudo systemctl restart docker: (19.7086593s)
	I0524 20:05:59.168692    4556 start.go:481] detecting cgroup driver to use...
	I0524 20:05:59.168692    4556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:05:59.217719    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 20:05:59.258739    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 20:05:59.280729    4556 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 20:05:59.289804    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 20:05:59.323853    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:05:59.359818    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 20:05:59.391294    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:05:59.422332    4556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 20:05:59.453292    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 20:05:59.487316    4556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 20:05:59.514274    4556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 20:05:59.541947    4556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:05:59.734524    4556 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 20:05:59.766734    4556 start.go:481] detecting cgroup driver to use...
	I0524 20:05:59.780482    4556 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 20:05:59.812886    4556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:05:59.846891    4556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 20:05:59.880996    4556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:05:59.915545    4556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 20:05:59.951453    4556 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 20:06:00.025439    4556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 20:06:00.053617    4556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:06:00.125621    4556 ssh_runner.go:195] Run: which cri-dockerd
	I0524 20:06:00.150623    4556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 20:06:00.170069    4556 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 20:06:00.228863    4556 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 20:06:00.484236    4556 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 20:06:00.699121    4556 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 20:06:00.699121    4556 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 20:06:00.749088    4556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:00.936425    4556 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 20:06:00.337717    7000 config.go:182] Loaded profile config "NoKubernetes-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 20:06:00.339401    7000 config.go:182] Loaded profile config "pause-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 20:06:00.340198    7000 config.go:182] Loaded profile config "running-upgrade-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0524 20:06:00.340198    7000 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 20:06:02.236727    7000 out.go:177] * Using the hyperv driver based on user configuration
	I0524 20:06:02.241752    7000 start.go:295] selected driver: hyperv
	I0524 20:06:02.241752    7000 start.go:870] validating driver "hyperv" against <nil>
	I0524 20:06:02.241752    7000 start.go:881] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 20:06:02.308741    7000 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 20:06:02.309722    7000 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0524 20:06:02.309722    7000 cni.go:84] Creating CNI manager for ""
	I0524 20:06:02.309722    7000 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:02.309722    7000 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 20:06:02.309722    7000 start_flags.go:319] config:
	{Name:force-systemd-flag-052200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-052200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:06:02.310730    7000 iso.go:125] acquiring lock: {Name:mk3b29db369ab0f922ac5eeb788beee87e18ec94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:06:02.314735    7000 out.go:177] * Starting control plane node force-systemd-flag-052200 in cluster force-systemd-flag-052200
	I0524 20:06:02.953206    4556 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.0167823s)
	I0524 20:06:02.963444    4556 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 20:06:03.157701    4556 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 20:06:03.353266    4556 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 20:06:03.569588    4556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:03.748952    4556 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 20:06:03.798296    4556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:03.976157    4556 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 20:06:04.098802    4556 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 20:06:04.111837    4556 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 20:06:04.121845    4556 start.go:549] Will wait 60s for crictl version
	I0524 20:06:04.130849    4556 ssh_runner.go:195] Run: which crictl
	I0524 20:06:04.150684    4556 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 20:06:04.220851    4556 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 20:06:04.229226    4556 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 20:06:04.282631    4556 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 20:06:01.368780   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:01.368780   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:01.368780   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:02.233377   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:02.233557   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:02.233557   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:03.452998   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:03.452998   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:03.452998   10868 provision.go:138] copyHostCerts
	I0524 20:06:03.452998   10868 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0524 20:06:03.453986   10868 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0524 20:06:03.453986   10868 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0524 20:06:03.455998   10868 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0524 20:06:03.455998   10868 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0524 20:06:03.455998   10868 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0524 20:06:03.456992   10868 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0524 20:06:03.456992   10868 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0524 20:06:03.457996   10868 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0524 20:06:03.459004   10868 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-893100 san=[172.27.134.82 172.27.134.82 localhost 127.0.0.1 minikube running-upgrade-893100]
	I0524 20:06:03.728326   10868 provision.go:172] copyRemoteCerts
	I0524 20:06:03.737398   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 20:06:03.737398   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:04.586035   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:04.586035   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:04.586128   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:02.321733    7000 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 20:06:02.321733    7000 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0524 20:06:02.321733    7000 cache.go:57] Caching tarball of preloaded images
	I0524 20:06:02.322748    7000 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0524 20:06:02.322748    7000 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 20:06:02.322748    7000 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-052200\config.json ...
	I0524 20:06:02.322748    7000 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-052200\config.json: {Name:mka0a0923dabc11ea4915f2cdd814ce71e98be0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:02.324750    7000 cache.go:195] Successfully downloaded all kic artifacts
	I0524 20:06:02.324750    7000 start.go:364] acquiring machines lock for force-systemd-flag-052200: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 20:06:04.337415    4556 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 20:06:04.337943    4556 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0524 20:06:04.342740    4556 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0524 20:06:04.342740    4556 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0524 20:06:04.342740    4556 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0524 20:06:04.342740    4556 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:74:1b:be Flags:up|broadcast|multicast|running}
	I0524 20:06:04.345961    4556 ip.go:210] interface addr: fe80::2d9b:6c8:36de:16db/64
	I0524 20:06:04.345961    4556 ip.go:210] interface addr: 172.27.128.1/20
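
ip.go scans the host's network interfaces for the one whose name starts with "vEthernet (Default Switch)" and takes its IPv4 address as the host-side gateway for the guest. The same lookup, sketched with Go's net package (the prefix comes from the log; the rest is illustrative):

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	const prefix = "vEthernet (Default Switch)"
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue // e.g. "Ethernet 2" and the loopback are skipped, as in the log
		}
		addrs, err := iface.Addrs()
		if err != nil {
			panic(err)
		}
		for _, addr := range addrs {
			fmt.Println("interface addr:", addr.String())
		}
		return
	}
	fmt.Println("no interface matching prefix", prefix)
}
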
	I0524 20:06:04.355071    4556 ssh_runner.go:195] Run: grep 172.27.128.1	host.minikube.internal$ /etc/hosts
	I0524 20:06:04.361757    4556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
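
The bash one-liner above makes the host.minikube.internal mapping idempotent: any stale line is filtered out and a fresh one pointing at the gateway is appended. A local Go sketch of the same replace-or-append logic (the path and IP are taken from the log; this is not the ssh_runner code path):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts" // needs root to write, like the sudo cp above
	const name = "host.minikube.internal"
	const gatewayIP = "172.27.128.1"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Mirror `grep -v $'\thost.minikube.internal$'`: drop lines ending in the mapping.
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", gatewayIP, name))
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
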
	I0524 20:06:04.383636    4556 localpath.go:92] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\client.crt -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\client.crt
	I0524 20:06:04.385029    4556 localpath.go:117] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\client.key -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\client.key
	I0524 20:06:04.386571    4556 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 20:06:04.392558    4556 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:06:04.429414    4556 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:06:04.429414    4556 docker.go:563] Images already preloaded, skipping extraction
	I0524 20:06:04.436477    4556 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:06:04.476086    4556 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:06:04.476086    4556 cache_images.go:84] Images are preloaded, skipping loading
	I0524 20:06:04.482629    4556 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 20:06:04.532626    4556 cni.go:84] Creating CNI manager for ""
	I0524 20:06:04.532626    4556 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:04.532626    4556 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 20:06:04.532626    4556 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.134.18 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:NoKubernetes-893100 NodeName:NoKubernetes-893100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.134.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.134.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 20:06:04.532626    4556 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.134.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "NoKubernetes-893100"
	  kubeletExtraArgs:
	    node-ip: 172.27.134.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.134.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 20:06:04.532626    4556 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=NoKubernetes-893100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.134.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:NoKubernetes-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 20:06:04.541620    4556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 20:06:04.568761    4556 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 20:06:04.581711    4556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 20:06:04.605115    4556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0524 20:06:04.638932    4556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 20:06:04.671524    4556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0524 20:06:04.718006    4556 ssh_runner.go:195] Run: grep 172.27.134.18	control-plane.minikube.internal$ /etc/hosts
	I0524 20:06:04.724023    4556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.134.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 20:06:04.746376    4556 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100 for IP: 172.27.134.18
	I0524 20:06:04.746462    4556 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:04.747226    4556 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0524 20:06:04.747373    4556 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0524 20:06:04.748219    4556 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\client.key
	I0524 20:06:04.748219    4556 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key.65e2ae56
	I0524 20:06:04.748755    4556 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt.65e2ae56 with IP's: [172.27.134.18 10.96.0.1 127.0.0.1 10.0.0.1]
	I0524 20:06:04.971535    4556 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt.65e2ae56 ...
	I0524 20:06:04.972539    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt.65e2ae56: {Name:mk3560aeed00029897190182186ed8cda7ba9211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:04.973603    4556 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key.65e2ae56 ...
	I0524 20:06:04.973603    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key.65e2ae56: {Name:mk0dcda055aab9733580bdf04f9905181c59f6fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:04.974581    4556 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt.65e2ae56 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt
	I0524 20:06:04.986543    4556 certs.go:341] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key.65e2ae56 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key
	I0524 20:06:04.987543    4556 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.key
	I0524 20:06:04.987543    4556 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.crt with IP's: []
	I0524 20:06:05.209022    4556 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.crt ...
	I0524 20:06:05.209022    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.crt: {Name:mk855573f394b139659b125b2169fcb2c42c1cda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:05.210021    4556 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.key ...
	I0524 20:06:05.210021    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.key: {Name:mkf5e64627dd020f5c501fa0f12c3043f4dd0c20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:05.222128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem (1338 bytes)
	W0524 20:06:05.222128    4556 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560_empty.pem, impossibly tiny 0 bytes
	I0524 20:06:05.222128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0524 20:06:05.222128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0524 20:06:05.222128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0524 20:06:05.223128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0524 20:06:05.223128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem (1708 bytes)
	I0524 20:06:05.224875    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 20:06:05.272003    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0524 20:06:05.319493    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 20:06:05.370453    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 20:06:05.417021    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 20:06:05.457401    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 20:06:05.500182    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 20:06:05.544310    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 20:06:05.591935    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 20:06:05.638549    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem --> /usr/share/ca-certificates/6560.pem (1338 bytes)
	I0524 20:06:05.680340    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /usr/share/ca-certificates/65602.pem (1708 bytes)
	I0524 20:06:05.726205    4556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
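
The "scp memory" entries copy an in-memory byte slice, here the generated kubeconfig, straight to a path inside the guest instead of transferring a file from disk. A rough sketch of that idea over SSH using golang.org/x/crypto/ssh; the address, credentials, and the tee-based write are assumptions, not minikube's transfer implementation:

package main

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

func main() {
	payload := []byte("# kubeconfig contents assembled in memory\n")

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("changeme")}, // assumption: minikube actually authenticates with the profile's id_rsa key
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "172.27.134.18:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	// Stream the buffer to the remote path; no local temp file is involved.
	session.Stdin = bytes.NewReader(payload)
	if err := session.Run("sudo tee /var/lib/minikube/kubeconfig > /dev/null"); err != nil {
		panic(err)
	}
}
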
	I0524 20:06:05.767423    4556 ssh_runner.go:195] Run: openssl version
	I0524 20:06:05.784749    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 20:06:05.813869    4556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:05.821773    4556 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:05.833861    4556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:05.850853    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 20:06:05.878304    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6560.pem && ln -fs /usr/share/ca-certificates/6560.pem /etc/ssl/certs/6560.pem"
	I0524 20:06:05.906297    4556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6560.pem
	I0524 20:06:05.914309    4556 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 20:06:05.925710    4556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6560.pem
	I0524 20:06:05.943582    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6560.pem /etc/ssl/certs/51391683.0"
	I0524 20:06:05.975648    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65602.pem && ln -fs /usr/share/ca-certificates/65602.pem /etc/ssl/certs/65602.pem"
	I0524 20:06:06.015170    4556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65602.pem
	I0524 20:06:06.023978    4556 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 20:06:06.035850    4556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65602.pem
	I0524 20:06:06.056997    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65602.pem /etc/ssl/certs/3ec20f2e.0"
	I0524 20:06:06.094832    4556 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 20:06:06.103406    4556 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 20:06:06.104848    4556 kubeadm.go:404] StartCluster: {Name:NoKubernetes-893100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:NoKubernetes-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.134.18 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:06:06.114867    4556 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 20:06:06.158802    4556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 20:06:06.188512    4556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 20:06:06.213818    4556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 20:06:06.237285    4556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 20:06:06.237285    4556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0524 20:06:07.025821   10012 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.9420978s)
	I0524 20:06:07.035631   10012 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 20:06:07.286145   10012 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 20:06:07.528622   10012 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 20:06:07.847079   10012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:08.145384   10012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 20:06:08.218382   10012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:08.594082   10012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 20:06:08.952592   10012 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 20:06:08.962588   10012 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
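
start.go waits up to 60s for the cri-dockerd socket to appear before probing crictl. A minimal sketch of that kind of bounded poll (the interval and error handling are assumptions):

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const socketPath = "/var/run/cri-dockerd.sock"
	deadline := time.Now().Add(60 * time.Second)
	for {
		if _, err := os.Stat(socketPath); err == nil {
			fmt.Println("socket is ready:", socketPath)
			return
		}
		if time.Now().After(deadline) {
			panic("timed out waiting for " + socketPath)
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}
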
	I0524 20:06:08.978277   10012 start.go:549] Will wait 60s for crictl version
	I0524 20:06:08.991301   10012 ssh_runner.go:195] Run: which crictl
	I0524 20:06:09.010872   10012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 20:06:09.158427   10012 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 20:06:09.167406   10012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 20:06:09.235215   10012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 20:06:05.796410   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:05.796598   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:05.797003   10868 sshutil.go:53] new ssh client: &{IP:172.27.134.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa Username:docker}
	I0524 20:06:05.917620   10868 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.1801s)
	I0524 20:06:05.918104   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0524 20:06:05.946671   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0524 20:06:05.972632   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0524 20:06:05.998139   10868 provision.go:86] duration metric: configureAuth took 6.861093s
	I0524 20:06:05.998139   10868 buildroot.go:189] setting minikube options for container-runtime
	I0524 20:06:05.998139   10868 config.go:182] Loaded profile config "running-upgrade-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0524 20:06:05.998139   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:06.835417   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:06.835417   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:06.835417   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:08.003023   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:08.003096   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:08.008201   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:08.008892   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:08.009428   10868 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 20:06:08.198102   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 20:06:08.198102   10868 buildroot.go:70] root file system type: tmpfs
	I0524 20:06:08.198398   10868 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 20:06:08.198501   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:09.022074   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:09.022074   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:09.022074   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:09.293679   10012 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 20:06:09.293679   10012 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:74:1b:be Flags:up|broadcast|multicast|running}
	I0524 20:06:09.303152   10012 ip.go:210] interface addr: fe80::2d9b:6c8:36de:16db/64
	I0524 20:06:09.304149   10012 ip.go:210] interface addr: 172.27.128.1/20
	I0524 20:06:09.316137   10012 ssh_runner.go:195] Run: grep 172.27.128.1	host.minikube.internal$ /etc/hosts
	I0524 20:06:09.324681   10012 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 20:06:09.332631   10012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:06:09.369439   10012 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:06:09.369531   10012 docker.go:563] Images already preloaded, skipping extraction
	I0524 20:06:09.378738   10012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:06:09.416677   10012 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:06:09.416752   10012 cache_images.go:84] Images are preloaded, skipping loading
	I0524 20:06:09.424464   10012 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 20:06:09.475847   10012 cni.go:84] Creating CNI manager for ""
	I0524 20:06:09.475914   10012 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:09.475914   10012 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 20:06:09.475982   10012 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.136.175 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-893100 NodeName:pause-893100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.136.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.136.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 20:06:09.476249   10012 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.136.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-893100"
	  kubeletExtraArgs:
	    node-ip: 172.27.136.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.136.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 20:06:09.476452   10012 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=pause-893100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.136.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:pause-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 20:06:09.485138   10012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 20:06:09.504201   10012 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 20:06:09.513369   10012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 20:06:09.532371   10012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0524 20:06:09.564859   10012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 20:06:09.594750   10012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0524 20:06:09.633134   10012 ssh_runner.go:195] Run: grep 172.27.136.175	control-plane.minikube.internal$ /etc/hosts
	I0524 20:06:09.645572   10012 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100 for IP: 172.27.136.175
	I0524 20:06:09.645572   10012 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:09.646787   10012 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0524 20:06:09.647146   10012 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0524 20:06:09.648111   10012 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\client.key
	I0524 20:06:09.648492   10012 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\apiserver.key.85da34c2
	I0524 20:06:09.648994   10012 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\proxy-client.key
	I0524 20:06:09.650350   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem (1338 bytes)
	W0524 20:06:09.650774   10012 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560_empty.pem, impossibly tiny 0 bytes
	I0524 20:06:09.650774   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0524 20:06:09.651481   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0524 20:06:09.651481   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0524 20:06:09.652160   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0524 20:06:09.652338   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem (1708 bytes)
	I0524 20:06:09.654026   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 20:06:09.697087   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0524 20:06:09.743829   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 20:06:09.789241   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 20:06:09.832568   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 20:06:09.878635   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 20:06:09.922855   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 20:06:09.967076   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 20:06:10.011306   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 20:06:10.054388   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem --> /usr/share/ca-certificates/6560.pem (1338 bytes)
	I0524 20:06:10.101750   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /usr/share/ca-certificates/65602.pem (1708 bytes)
	I0524 20:06:10.152071   10012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 20:06:06.507025    4556 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 20:06:06.508025    4556 kubeadm.go:322] W0524 20:06:06.503827    1529 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 20:06:10.225770   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:10.225770   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:10.229774   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:10.229774   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:10.229774   10868 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 20:06:10.399947   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 20:06:10.399947   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:11.230707   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:11.230707   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:11.230707   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:12.467458   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:12.467458   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:12.471467   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:12.472454   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:12.472454   10868 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 20:06:10.211751   10012 ssh_runner.go:195] Run: openssl version
	I0524 20:06:10.229774   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 20:06:10.266275   10012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:10.277278   10012 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:10.287269   10012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:10.310865   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 20:06:10.340942   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6560.pem && ln -fs /usr/share/ca-certificates/6560.pem /etc/ssl/certs/6560.pem"
	I0524 20:06:10.366956   10012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6560.pem
	I0524 20:06:10.375774   10012 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 20:06:10.384953   10012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6560.pem
	I0524 20:06:10.405947   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6560.pem /etc/ssl/certs/51391683.0"
	I0524 20:06:10.444116   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65602.pem && ln -fs /usr/share/ca-certificates/65602.pem /etc/ssl/certs/65602.pem"
	I0524 20:06:10.474716   10012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65602.pem
	I0524 20:06:10.482279   10012 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 20:06:10.492728   10012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65602.pem
	I0524 20:06:10.512758   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65602.pem /etc/ssl/certs/3ec20f2e.0"
	I0524 20:06:10.583724   10012 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 20:06:10.605736   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0524 20:06:10.624468   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0524 20:06:10.643480   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0524 20:06:10.669632   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0524 20:06:10.689627   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0524 20:06:10.708343   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
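
The six openssl calls above are 24-hour expiry checks (-checkend 86400) on the cluster's client and serving certificates; a failing check would force regeneration before the restart. An equivalent check for one of the same paths in Go (sketch only, meant to run inside the guest):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same question as `openssl x509 -checkend 86400`: does the certificate
	// expire within the next 86400 seconds?
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate expires within 86400s, regeneration needed")
	} else {
		fmt.Println("certificate valid for at least another 86400s")
	}
}
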
	I0524 20:06:10.717465   10012 kubeadm.go:404] StartCluster: {Name:pause-893100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:pause-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.136.175 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:06:10.726272   10012 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 20:06:10.777160   10012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 20:06:10.801082   10012 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0524 20:06:10.801082   10012 kubeadm.go:636] restartCluster start
	I0524 20:06:10.811686   10012 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0524 20:06:10.843193   10012 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:10.843880   10012 kubeconfig.go:92] found "pause-893100" server: "https://172.27.136.175:8443"
	I0524 20:06:10.845890   10012 kapi.go:59] client config for pause-893100: &rest.Config{Host:"https://172.27.136.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-893100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-893100\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
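
kapi.go builds a client-go rest.Config pointed at https://172.27.136.175:8443 with the profile's client certificate and key, as dumped above. A minimal client-go sketch of talking to such a cluster; here the config is loaded from a kubeconfig path, which is an assumption rather than the in-process construction shown in the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig for the pause-893100 profile exists at this path.
	kubeconfig := `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("kube-system has %d pods\n", len(pods.Items))
}
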
	I0524 20:06:10.856500   10012 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0524 20:06:10.883825   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:10.893143   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:10.920443   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:11.421350   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:11.430355   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:11.451625   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:11.929976   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:11.939775   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:11.970377   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:12.436501   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:12.447814   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:12.471467   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:12.926501   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:12.937214   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:12.960579   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:13.427159   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:13.438876   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:13.464698   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:13.929541   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:13.947415   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:13.969612   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:14.432643   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:14.442446   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:14.467022   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:14.925651   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:14.938049   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:14.962986   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:11.632161    4556 kubeadm.go:322] W0524 20:06:11.628114    1529 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 20:06:15.430640   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:15.441565   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:15.463640   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:15.932254   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:15.947114   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:15.968471   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:16.434507   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:16.449727   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:16.473779   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:16.923657   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:16.934780   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:16.956350   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:17.429985   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:17.441030   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:17.464322   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:17.931369   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:17.941210   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:17.966233   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:18.435534   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:18.446265   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:18.467386   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:18.923047   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:18.934000   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:18.958043   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:19.427096   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:19.438136   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:19.462070   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:19.928866   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:19.938699   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:19.960165   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
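
The repeated "Checking apiserver status ..." entries above show minikube polling over SSH for a running kube-apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*) roughly every half second until it appears or a deadline passes. Below is a minimal, self-contained sketch of that kind of retry loop; the runOverSSH helper, the timings, and running the command locally are illustrative assumptions, not minikube's actual api_server.go implementation.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runOverSSH is a stand-in for minikube's ssh_runner; here it simply runs the
// command locally so the sketch stays self-contained (illustrative assumption).
func runOverSSH(ctx context.Context, args ...string) ([]byte, error) {
	return exec.CommandContext(ctx, args[0], args[1:]...).Output()
}

// waitForAPIServerProcess polls pgrep until the kube-apiserver process shows
// up or the deadline expires, mirroring the loop visible in the log above.
func waitForAPIServerProcess(timeout time.Duration) (string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := runOverSSH(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
		if err == nil {
			return string(out), nil // pid found
		}
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
		case <-ticker.C:
			// retry
		}
	}
}

func main() {
	pid, err := waitForAPIServerProcess(2 * time.Minute)
	if err != nil {
		fmt.Println("wait failed:", err)
		return
	}
	fmt.Println("apiserver pid:", pid)
}
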
	I0524 20:06:24.351871    4556 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0524 20:06:24.351965    4556 kubeadm.go:322] [preflight] Running pre-flight checks
	I0524 20:06:24.352240    4556 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0524 20:06:24.352574    4556 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0524 20:06:24.352756    4556 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0524 20:06:24.352990    4556 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0524 20:06:24.355726    4556 out.go:204]   - Generating certificates and keys ...
	I0524 20:06:24.355726    4556 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0524 20:06:24.355726    4556 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost nokubernetes-893100] and IPs [172.27.134.18 127.0.0.1 ::1]
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost nokubernetes-893100] and IPs [172.27.134.18 127.0.0.1 ::1]
	I0524 20:06:24.357726    4556 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0524 20:06:24.357726    4556 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0524 20:06:24.357726    4556 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0524 20:06:24.358732    4556 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0524 20:06:24.362720    4556 out.go:204]   - Booting up control plane ...
	I0524 20:06:24.362720    4556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0524 20:06:24.362720    4556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0524 20:06:24.362720    4556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0524 20:06:24.362720    4556 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0524 20:06:24.363750    4556 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0524 20:06:24.363750    4556 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.003819 seconds
	I0524 20:06:24.363750    4556 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0524 20:06:24.363750    4556 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0524 20:06:24.363750    4556 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0524 20:06:24.364733    4556 kubeadm.go:322] [mark-control-plane] Marking the node nokubernetes-893100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0524 20:06:24.364733    4556 kubeadm.go:322] [bootstrap-token] Using token: vpphdh.2ag8sqvvjsk8wehw
	I0524 20:06:24.367712    4556 out.go:204]   - Configuring RBAC rules ...
	I0524 20:06:24.367712    4556 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0524 20:06:24.367712    4556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0524 20:06:24.368747    4556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0524 20:06:24.368747    4556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0524 20:06:24.368747    4556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0524 20:06:24.368747    4556 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0524 20:06:24.369771    4556 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0524 20:06:24.369771    4556 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0524 20:06:24.369771    4556 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0524 20:06:24.369771    4556 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0524 20:06:24.369771    4556 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0524 20:06:24.370733    4556 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0524 20:06:24.370733    4556 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0524 20:06:24.370733    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0524 20:06:24.370733    4556 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0524 20:06:24.370733    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vpphdh.2ag8sqvvjsk8wehw \
	I0524 20:06:24.370733    4556 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 \
	I0524 20:06:24.370733    4556 kubeadm.go:322] 	--control-plane 
	I0524 20:06:24.370733    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0524 20:06:24.370733    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vpphdh.2ag8sqvvjsk8wehw \
	I0524 20:06:24.370733    4556 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 
	I0524 20:06:24.370733    4556 cni.go:84] Creating CNI manager for ""
	I0524 20:06:24.370733    4556 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:24.373704    4556 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 20:06:20.433357   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:20.444510   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:20.474481   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:20.890580   10012 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0524 20:06:20.890652   10012 kubeadm.go:1123] stopping kube-system containers ...
	I0524 20:06:20.900473   10012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 20:06:20.945688   10012 docker.go:459] Stopping containers: [ce5ccdf7db90 e32e5e876d0b 2fdde93d5dbe f47bd4ea62c0 942ff1fce0b3 f729780e2503 4c6a62dd6d30 154b072e6b05 9c38b6ba36d7 251fb037891f 9fdc40d1ef26 25915e591443 cdc9a8b35153 cb84e1827ba6 d4b4d742aac0 081fe0ce4189 b7b77095ef5a 81a385409af8 f933432635d0 2fab2362d925 eb3fae8732d0 72ae2d8f679f b2a0e99efa06 d9880fb68a45 f6507871d53c 9157454e4296]
	I0524 20:06:20.953822   10012 ssh_runner.go:195] Run: docker stop ce5ccdf7db90 e32e5e876d0b 2fdde93d5dbe f47bd4ea62c0 942ff1fce0b3 f729780e2503 4c6a62dd6d30 154b072e6b05 9c38b6ba36d7 251fb037891f 9fdc40d1ef26 25915e591443 cdc9a8b35153 cb84e1827ba6 d4b4d742aac0 081fe0ce4189 b7b77095ef5a 81a385409af8 f933432635d0 2fab2362d925 eb3fae8732d0 72ae2d8f679f b2a0e99efa06 d9880fb68a45 f6507871d53c 9157454e4296
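
The two lines above (docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}, followed by a bulk docker stop) show how minikube tears down the existing kube-system containers before reconfiguring the cluster. A rough Go sketch of the same two-step pattern using os/exec is below; it is a hypothetical illustration run directly against the local Docker CLI, not minikube's actual docker.go code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists container IDs whose names match the
// k8s_*_(kube-system)_ pattern and stops them with a single `docker stop`,
// mirroring the two commands in the log above.
func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_",
		"--format", "{{.ID}}").Output()
	if err != nil {
		return fmt.Errorf("listing containers: %w", err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	args := append([]string{"stop"}, ids...)
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("docker stop: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Println(err)
	}
}
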
	I0524 20:06:24.387732    4556 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 20:06:24.413737    4556 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0524 20:06:24.460770    4556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 20:06:24.470743    4556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e minikube.k8s.io/name=NoKubernetes-893100 minikube.k8s.io/updated_at=2023_05_24T20_06_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 20:06:24.471712    4556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 20:06:24.559407    4556 ops.go:34] apiserver oom_adj: -16
	I0524 20:06:24.979604    4556 kubeadm.go:1076] duration metric: took 518.8342ms to wait for elevateKubeSystemPrivileges.
	I0524 20:06:24.979604    4556 kubeadm.go:406] StartCluster complete in 18.8747661s
	I0524 20:06:24.979604    4556 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:24.979604    4556 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 20:06:24.981234    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:24.983214    4556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 20:06:24.983214    4556 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0524 20:06:24.983214    4556 addons.go:66] Setting storage-provisioner=true in profile "NoKubernetes-893100"
	I0524 20:06:24.983214    4556 addons.go:228] Setting addon storage-provisioner=true in "NoKubernetes-893100"
	I0524 20:06:24.983214    4556 addons.go:66] Setting default-storageclass=true in profile "NoKubernetes-893100"
	I0524 20:06:24.983214    4556 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "NoKubernetes-893100"
	I0524 20:06:24.983214    4556 host.go:66] Checking if "NoKubernetes-893100" exists ...
	I0524 20:06:24.983214    4556 config.go:182] Loaded profile config "NoKubernetes-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 20:06:24.984213    4556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-893100 ).state
	I0524 20:06:24.984213    4556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-893100 ).state
	I0524 20:06:25.211785    4556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0524 20:06:25.563981    4556 kapi.go:248] "coredns" deployment in "kube-system" namespace and "NoKubernetes-893100" context rescaled to 1 replicas
	I0524 20:06:25.563981    4556 start.go:223] Will wait 6m0s for node &{Name: IP:172.27.134.18 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 20:06:25.568118    4556 out.go:177] * Verifying Kubernetes components...
	I0524 20:06:26.340931   10012 ssh_runner.go:235] Completed: docker stop ce5ccdf7db90 e32e5e876d0b 2fdde93d5dbe f47bd4ea62c0 942ff1fce0b3 f729780e2503 4c6a62dd6d30 154b072e6b05 9c38b6ba36d7 251fb037891f 9fdc40d1ef26 25915e591443 cdc9a8b35153 cb84e1827ba6 d4b4d742aac0 081fe0ce4189 b7b77095ef5a 81a385409af8 f933432635d0 2fab2362d925 eb3fae8732d0 72ae2d8f679f b2a0e99efa06 d9880fb68a45 f6507871d53c 9157454e4296: (5.387112s)
	I0524 20:06:26.353946   10012 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0524 20:06:26.415998   10012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 20:06:26.435993   10012 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 24 20:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 May 24 20:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 May 24 20:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 May 24 20:03 /etc/kubernetes/scheduler.conf
	
	I0524 20:06:26.454002   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0524 20:06:26.488848   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0524 20:06:26.521267   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0524 20:06:26.538077   10012 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:26.548788   10012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0524 20:06:26.582717   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0524 20:06:26.611300   10012 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:26.636218   10012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0524 20:06:26.674616   10012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 20:06:26.696923   10012 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0524 20:06:26.696923   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:26.808540   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:28.718776   10012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.910239s)
	I0524 20:06:28.718887   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:29.128272   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:29.291393   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
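
For the restart path, the log drives individual `kubeadm init phase` subcommands (certs all, kubeconfig all, kubelet-start, control-plane all, etcd local) against the existing /var/tmp/minikube/kubeadm.yaml instead of a full `kubeadm init`. A hedged sketch of sequencing those same phases with os/exec follows; it runs kubeadm directly rather than through minikube's ssh_runner, and assumes kubeadm is on PATH (illustration only).

package main

import (
	"fmt"
	"os/exec"
)

// reconfigurePhases mirrors the phase sequence visible in the log above.
// It assumes kubeadm is on PATH and the config file already exists.
func reconfigurePhases(config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all", "--config", config},
		{"init", "phase", "kubeconfig", "all", "--config", config},
		{"init", "phase", "kubelet-start", "--config", config},
		{"init", "phase", "control-plane", "all", "--config", config},
		{"init", "phase", "etcd", "local", "--config", config},
	}
	for _, p := range phases {
		if out, err := exec.Command("kubeadm", p...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v: %v: %s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := reconfigurePhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}
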
	I0524 20:06:29.441004   10012 api_server.go:52] waiting for apiserver process to appear ...
	I0524 20:06:29.456788   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 20:06:29.478798   10012 api_server.go:72] duration metric: took 37.7941ms to wait for apiserver process to appear ...
	I0524 20:06:29.478798   10012 api_server.go:88] waiting for apiserver healthz status ...
	I0524 20:06:29.478798   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
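
After the phases complete, the log switches from process checks to an HTTP probe of https://172.27.136.175:8443/healthz. A self-contained sketch of that kind of healthz poll is below; skipping TLS verification is a shortcut for the sketch only (minikube's api_server.go uses the cluster's client certificates instead).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers "ok"
// or the deadline passes. InsecureSkipVerify is only for this sketch.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never reported healthy", url)
}

func main() {
	if err := waitForHealthz("https://172.27.136.175:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
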
	I0524 20:06:26.560253   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
	+++ /lib/systemd/system/docker.service.new
	@@ -3,9 +3,12 @@
	 Documentation=https://docs.docker.com
	 After=network.target  minikube-automount.service docker.socket
	 Requires= minikube-automount.service docker.socket 
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -21,7 +24,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 20:06:26.560426   10868 machine.go:91] provisioned docker machine in 32.1769885s
	I0524 20:06:26.560426   10868 start.go:300] post-start starting for "running-upgrade-893100" (driver="hyperv")
	I0524 20:06:26.560426   10868 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 20:06:26.575492   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 20:06:26.575492   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:27.435244   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:27.435244   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:27.435244   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:28.838356   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:28.838356   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:28.838933   10868 sshutil.go:53] new ssh client: &{IP:172.27.134.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa Username:docker}
	I0524 20:06:28.959761   10868 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (2.3841871s)
	I0524 20:06:28.972843   10868 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 20:06:28.980533   10868 info.go:137] Remote host: Buildroot 2019.02.7
	I0524 20:06:28.980613   10868 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0524 20:06:28.980976   10868 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0524 20:06:28.982117   10868 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> 65602.pem in /etc/ssl/certs
	I0524 20:06:28.995954   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0524 20:06:29.013022   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /etc/ssl/certs/65602.pem (1708 bytes)
	I0524 20:06:29.050228   10868 start.go:303] post-start completed in 2.4898051s
	I0524 20:06:29.050228   10868 fix.go:57] fixHost completed within 36.3092251s
	I0524 20:06:29.050228   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:29.975618   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:29.975688   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:29.975688   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-05-24 20:04:50 UTC, ends at Wed 2023-05-24 20:06:35 UTC. --
	May 24 20:06:13 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:13.321455320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:13 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:13.337481794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:13 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:13.337577094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:13 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:13.337612395Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:13 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:13.337631895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:14 NoKubernetes-893100 cri-dockerd[1363]: time="2023-05-24T20:06:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/01f146ab906ef2151a7b3fdb1987eff90b66bba96b2415156c894bfcb5044436/resolv.conf as [nameserver 172.27.128.1]"
	May 24 20:06:14 NoKubernetes-893100 cri-dockerd[1363]: time="2023-05-24T20:06:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f6c4d96fa2a487b32a55f374952b464e2077bac742ebe56feaef7c3df7bc18d5/resolv.conf as [nameserver 172.27.128.1]"
	May 24 20:06:14 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:14.449465077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:14 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:14.449998280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:14 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:14.450029580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:14 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:14.450043280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:14 NoKubernetes-893100 cri-dockerd[1363]: time="2023-05-24T20:06:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/aa7efbbc7ff7394098d4d41fcd4d9d197210747d1b76eaac30b9f8a200d32d12/resolv.conf as [nameserver 172.27.128.1]"
	May 24 20:06:14 NoKubernetes-893100 cri-dockerd[1363]: time="2023-05-24T20:06:14Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/95af0576863646749334080156194b0ccf4be39bc33b1bfc79f750406ec568df/resolv.conf as [nameserver 172.27.128.1]"
	May 24 20:06:14 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:14.774795980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:14 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:14.774892480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:14 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:14.774927080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:14 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:14.775694984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:14 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:14.952707047Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:14 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:14.965023600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:14 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:14.965160100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:14 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:14.965293201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:15 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:15.010646894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:15 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:15.011084195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:15 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:15.011195596Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:15 NoKubernetes-893100 dockerd[1175]: time="2023-05-24T20:06:15.011289896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	d934cf964492b       89e70da428d29       21 seconds ago      Running             kube-scheduler            0                   95af057686364
	e23e9e923686f       86b6af7dd652c       21 seconds ago      Running             etcd                      0                   aa7efbbc7ff73
	14ed10c049e32       ac2b7465ebba9       21 seconds ago      Running             kube-controller-manager   0                   f6c4d96fa2a48
	c97f0548851e8       c5b13e4f7806d       21 seconds ago      Running             kube-apiserver            0                   01f146ab906ef
	
	* 
	* ==> describe nodes <==
	* Name:               nokubernetes-893100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=nokubernetes-893100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=NoKubernetes-893100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T20_06_24_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 20:06:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  nokubernetes-893100
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 20:06:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 20:06:29 +0000   Wed, 24 May 2023 20:06:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 20:06:29 +0000   Wed, 24 May 2023 20:06:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 20:06:29 +0000   Wed, 24 May 2023 20:06:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 20:06:29 +0000   Wed, 24 May 2023 20:06:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.134.18
	  Hostname:    nokubernetes-893100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925712Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             5925712Ki
	  pods:               110
	System Info:
	  Machine ID:                 2835c050d6414211a08de49ef620232c
	  System UUID:                24ebd646-8482-dc4b-a15f-3ff76025fd03
	  Boot ID:                    8f0d139f-ed1d-49e6-b314-9aab3a88cfca
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-nokubernetes-893100                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12s
	  kube-system                 kube-apiserver-nokubernetes-893100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-controller-manager-nokubernetes-893100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kube-scheduler-nokubernetes-893100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  23s (x8 over 23s)  kubelet          Node nokubernetes-893100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 23s)  kubelet          Node nokubernetes-893100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 23s)  kubelet          Node nokubernetes-893100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 11s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s                kubelet          Node nokubernetes-893100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s                kubelet          Node nokubernetes-893100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s                kubelet          Node nokubernetes-893100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6s                 kubelet          Node nokubernetes-893100 status is now: NodeReady
	  Normal  RegisteredNode           0s                 node-controller  Node nokubernetes-893100 event: Registered Node nokubernetes-893100 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000070] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.680348] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.447429] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +1.283894] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +8.517769] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[May24 20:05] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.181126] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[ +22.360681] systemd-fstab-generator[931]: Ignoring "noauto" for root device
	[ +19.087367] kauditd_printk_skb: 14 callbacks suppressed
	[  +1.176528] systemd-fstab-generator[1094]: Ignoring "noauto" for root device
	[May24 20:06] systemd-fstab-generator[1136]: Ignoring "noauto" for root device
	[  +0.252232] systemd-fstab-generator[1147]: Ignoring "noauto" for root device
	[  +0.250423] systemd-fstab-generator[1160]: Ignoring "noauto" for root device
	[  +2.218062] systemd-fstab-generator[1308]: Ignoring "noauto" for root device
	[  +0.191870] systemd-fstab-generator[1319]: Ignoring "noauto" for root device
	[  +0.205491] systemd-fstab-generator[1330]: Ignoring "noauto" for root device
	[  +0.207364] systemd-fstab-generator[1341]: Ignoring "noauto" for root device
	[  +0.224523] systemd-fstab-generator[1355]: Ignoring "noauto" for root device
	[  +7.624974] systemd-fstab-generator[1614]: Ignoring "noauto" for root device
	[  +0.672976] kauditd_printk_skb: 68 callbacks suppressed
	[ +11.579557] hrtimer: interrupt took 1111802 ns
	[  +0.210240] systemd-fstab-generator[2672]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [e23e9e923686] <==
	* {"level":"info","ts":"2023-05-24T20:06:16.751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a77b1f900ec1324e switched to configuration voters=(12068274330052670030)"}
	{"level":"info","ts":"2023-05-24T20:06:16.754Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9a379597574d8f15","local-member-id":"a77b1f900ec1324e","added-peer-id":"a77b1f900ec1324e","added-peer-peer-urls":["https://172.27.134.18:2380"]}
	{"level":"info","ts":"2023-05-24T20:06:16.779Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-05-24T20:06:16.780Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"a77b1f900ec1324e","initial-advertise-peer-urls":["https://172.27.134.18:2380"],"listen-peer-urls":["https://172.27.134.18:2380"],"advertise-client-urls":["https://172.27.134.18:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.134.18:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-24T20:06:16.780Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-24T20:06:16.780Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"172.27.134.18:2380"}
	{"level":"info","ts":"2023-05-24T20:06:16.780Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"172.27.134.18:2380"}
	{"level":"info","ts":"2023-05-24T20:06:17.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a77b1f900ec1324e is starting a new election at term 1"}
	{"level":"info","ts":"2023-05-24T20:06:17.491Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a77b1f900ec1324e became pre-candidate at term 1"}
	{"level":"info","ts":"2023-05-24T20:06:17.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a77b1f900ec1324e received MsgPreVoteResp from a77b1f900ec1324e at term 1"}
	{"level":"info","ts":"2023-05-24T20:06:17.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a77b1f900ec1324e became candidate at term 2"}
	{"level":"info","ts":"2023-05-24T20:06:17.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a77b1f900ec1324e received MsgVoteResp from a77b1f900ec1324e at term 2"}
	{"level":"info","ts":"2023-05-24T20:06:17.492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a77b1f900ec1324e became leader at term 2"}
	{"level":"info","ts":"2023-05-24T20:06:17.493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a77b1f900ec1324e elected leader a77b1f900ec1324e at term 2"}
	{"level":"info","ts":"2023-05-24T20:06:17.499Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T20:06:17.506Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9a379597574d8f15","local-member-id":"a77b1f900ec1324e","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T20:06:17.506Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T20:06:17.507Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T20:06:17.507Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a77b1f900ec1324e","local-member-attributes":"{Name:nokubernetes-893100 ClientURLs:[https://172.27.134.18:2379]}","request-path":"/0/members/a77b1f900ec1324e/attributes","cluster-id":"9a379597574d8f15","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-24T20:06:17.507Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T20:06:17.514Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"172.27.134.18:2379"}
	{"level":"info","ts":"2023-05-24T20:06:17.507Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T20:06:17.517Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T20:06:17.519Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-24T20:06:17.535Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  20:06:36 up 1 min,  0 users,  load average: 1.47, 0.50, 0.18
	Linux NoKubernetes-893100 5.10.57 #1 SMP Sat May 20 03:22:25 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c97f0548851e] <==
	* I0524 20:06:19.840918       1 cache.go:39] Caches are synced for autoregister controller
	I0524 20:06:19.918443       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0524 20:06:19.918979       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0524 20:06:19.937981       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0524 20:06:19.941739       1 controller.go:624] quota admission added evaluator for: namespaces
	I0524 20:06:19.942018       1 shared_informer.go:318] Caches are synced for configmaps
	I0524 20:06:19.946863       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0524 20:06:19.947693       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0524 20:06:19.948766       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0524 20:06:19.951884       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0524 20:06:20.155937       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0524 20:06:20.229986       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0524 20:06:20.757350       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0524 20:06:20.767325       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0524 20:06:20.767342       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0524 20:06:22.195455       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0524 20:06:22.317037       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0524 20:06:22.473976       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0524 20:06:22.491284       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [172.27.134.18]
	I0524 20:06:22.493001       1 controller.go:624] quota admission added evaluator for: endpoints
	I0524 20:06:22.505323       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0524 20:06:22.853529       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0524 20:06:24.222128       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0524 20:06:24.253200       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0524 20:06:24.292678       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [14ed10c049e3] <==
	* I0524 20:06:35.806900       1 range_allocator.go:380] "Set node PodCIDR" node="nokubernetes-893100" podCIDRs=[10.244.0.0/24]
	I0524 20:06:35.809205       1 shared_informer.go:318] Caches are synced for disruption
	I0524 20:06:35.811974       1 shared_informer.go:318] Caches are synced for GC
	I0524 20:06:35.813196       1 shared_informer.go:318] Caches are synced for daemon sets
	I0524 20:06:35.814483       1 shared_informer.go:318] Caches are synced for endpoint
	I0524 20:06:35.823592       1 shared_informer.go:318] Caches are synced for ephemeral
	I0524 20:06:35.825320       1 shared_informer.go:318] Caches are synced for TTL
	I0524 20:06:35.830007       1 shared_informer.go:318] Caches are synced for job
	I0524 20:06:35.830304       1 shared_informer.go:318] Caches are synced for taint
	I0524 20:06:35.831674       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0524 20:06:35.832133       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="nokubernetes-893100"
	I0524 20:06:35.832820       1 shared_informer.go:318] Caches are synced for deployment
	I0524 20:06:35.833186       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0524 20:06:35.833842       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0524 20:06:35.833899       1 taint_manager.go:211] "Sending events to api server"
	I0524 20:06:35.850708       1 shared_informer.go:318] Caches are synced for PV protection
	I0524 20:06:35.851082       1 event.go:307] "Event occurred" object="nokubernetes-893100" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node nokubernetes-893100 event: Registered Node nokubernetes-893100 in Controller"
	I0524 20:06:35.852961       1 shared_informer.go:318] Caches are synced for resource quota
	I0524 20:06:35.860118       1 shared_informer.go:318] Caches are synced for expand
	I0524 20:06:35.865007       1 shared_informer.go:318] Caches are synced for persistent volume
	I0524 20:06:35.872974       1 shared_informer.go:318] Caches are synced for resource quota
	I0524 20:06:35.923282       1 shared_informer.go:318] Caches are synced for attach detach
	I0524 20:06:36.254910       1 shared_informer.go:318] Caches are synced for garbage collector
	I0524 20:06:36.324897       1 shared_informer.go:318] Caches are synced for garbage collector
	I0524 20:06:36.325083       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-scheduler [d934cf964492] <==
	* W0524 20:06:21.111171       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0524 20:06:21.111228       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0524 20:06:21.131426       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 20:06:21.131552       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 20:06:21.135887       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0524 20:06:21.135981       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0524 20:06:21.139071       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0524 20:06:21.139454       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0524 20:06:21.165582       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0524 20:06:21.165616       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0524 20:06:21.255585       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0524 20:06:21.255843       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0524 20:06:21.283573       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0524 20:06:21.283631       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0524 20:06:21.342351       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0524 20:06:21.342749       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0524 20:06:21.447123       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 20:06:21.447158       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 20:06:21.463573       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0524 20:06:21.463697       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0524 20:06:21.487587       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 20:06:21.487704       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 20:06:21.504294       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0524 20:06:21.504670       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0524 20:06:24.190818       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 20:04:50 UTC, ends at Wed 2023-05-24 20:06:36 UTC. --
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.792375    2700 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.829532    2700 topology_manager.go:212] "Topology Admit Handler"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.830509    2700 topology_manager.go:212] "Topology Admit Handler"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.830745    2700 topology_manager.go:212] "Topology Admit Handler"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.831163    2700 topology_manager.go:212] "Topology Admit Handler"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: E0524 20:06:24.869515    2700 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"etcd-nokubernetes-893100\" already exists" pod="kube-system/etcd-nokubernetes-893100"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.977686    2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/f73d3561f4d7fcb59ab45bba23a51ef6-etcd-certs\") pod \"etcd-nokubernetes-893100\" (UID: \"f73d3561f4d7fcb59ab45bba23a51ef6\") " pod="kube-system/etcd-nokubernetes-893100"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.977736    2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/f73d3561f4d7fcb59ab45bba23a51ef6-etcd-data\") pod \"etcd-nokubernetes-893100\" (UID: \"f73d3561f4d7fcb59ab45bba23a51ef6\") " pod="kube-system/etcd-nokubernetes-893100"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.977765    2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ae197192265d1abdefc35395bb17105-ca-certs\") pod \"kube-apiserver-nokubernetes-893100\" (UID: \"3ae197192265d1abdefc35395bb17105\") " pod="kube-system/kube-apiserver-nokubernetes-893100"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.977792    2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ae197192265d1abdefc35395bb17105-k8s-certs\") pod \"kube-apiserver-nokubernetes-893100\" (UID: \"3ae197192265d1abdefc35395bb17105\") " pod="kube-system/kube-apiserver-nokubernetes-893100"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.977874    2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ae197192265d1abdefc35395bb17105-usr-share-ca-certificates\") pod \"kube-apiserver-nokubernetes-893100\" (UID: \"3ae197192265d1abdefc35395bb17105\") " pod="kube-system/kube-apiserver-nokubernetes-893100"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.977975    2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cda35358a6d270e95ee3143c5c79ddc7-ca-certs\") pod \"kube-controller-manager-nokubernetes-893100\" (UID: \"cda35358a6d270e95ee3143c5c79ddc7\") " pod="kube-system/kube-controller-manager-nokubernetes-893100"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.978017    2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cda35358a6d270e95ee3143c5c79ddc7-flexvolume-dir\") pod \"kube-controller-manager-nokubernetes-893100\" (UID: \"cda35358a6d270e95ee3143c5c79ddc7\") " pod="kube-system/kube-controller-manager-nokubernetes-893100"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.978042    2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cda35358a6d270e95ee3143c5c79ddc7-k8s-certs\") pod \"kube-controller-manager-nokubernetes-893100\" (UID: \"cda35358a6d270e95ee3143c5c79ddc7\") " pod="kube-system/kube-controller-manager-nokubernetes-893100"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.978078    2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cda35358a6d270e95ee3143c5c79ddc7-kubeconfig\") pod \"kube-controller-manager-nokubernetes-893100\" (UID: \"cda35358a6d270e95ee3143c5c79ddc7\") " pod="kube-system/kube-controller-manager-nokubernetes-893100"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.978107    2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cda35358a6d270e95ee3143c5c79ddc7-usr-share-ca-certificates\") pod \"kube-controller-manager-nokubernetes-893100\" (UID: \"cda35358a6d270e95ee3143c5c79ddc7\") " pod="kube-system/kube-controller-manager-nokubernetes-893100"
	May 24 20:06:24 NoKubernetes-893100 kubelet[2700]: I0524 20:06:24.978132    2700 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/597795c233d1c8c9d8a5add9d8680ef4-kubeconfig\") pod \"kube-scheduler-nokubernetes-893100\" (UID: \"597795c233d1c8c9d8a5add9d8680ef4\") " pod="kube-system/kube-scheduler-nokubernetes-893100"
	May 24 20:06:25 NoKubernetes-893100 kubelet[2700]: I0524 20:06:25.392263    2700 apiserver.go:52] "Watching apiserver"
	May 24 20:06:25 NoKubernetes-893100 kubelet[2700]: I0524 20:06:25.475923    2700 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	May 24 20:06:25 NoKubernetes-893100 kubelet[2700]: I0524 20:06:25.481376    2700 reconciler.go:41] "Reconciler: start to sync state"
	May 24 20:06:25 NoKubernetes-893100 kubelet[2700]: I0524 20:06:25.772079    2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-nokubernetes-893100" podStartSLOduration=1.771989133 podCreationTimestamp="2023-05-24 20:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-24 20:06:25.748688884 +0000 UTC m=+1.575776358" watchObservedRunningTime="2023-05-24 20:06:25.771989133 +0000 UTC m=+1.599076507"
	May 24 20:06:25 NoKubernetes-893100 kubelet[2700]: I0524 20:06:25.811264    2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-nokubernetes-893100" podStartSLOduration=1.811161116 podCreationTimestamp="2023-05-24 20:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-24 20:06:25.774264538 +0000 UTC m=+1.601352012" watchObservedRunningTime="2023-05-24 20:06:25.811161116 +0000 UTC m=+1.638248490"
	May 24 20:06:25 NoKubernetes-893100 kubelet[2700]: I0524 20:06:25.844769    2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-nokubernetes-893100" podStartSLOduration=1.8447233870000002 podCreationTimestamp="2023-05-24 20:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-24 20:06:25.811338017 +0000 UTC m=+1.638425491" watchObservedRunningTime="2023-05-24 20:06:25.844723387 +0000 UTC m=+1.671810761"
	May 24 20:06:25 NoKubernetes-893100 kubelet[2700]: I0524 20:06:25.893573    2700 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-nokubernetes-893100" podStartSLOduration=2.893528591 podCreationTimestamp="2023-05-24 20:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-05-24 20:06:25.844902788 +0000 UTC m=+1.671990262" watchObservedRunningTime="2023-05-24 20:06:25.893528591 +0000 UTC m=+1.720615965"
	May 24 20:06:29 NoKubernetes-893100 kubelet[2700]: I0524 20:06:29.208710    2700 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p NoKubernetes-893100 -n NoKubernetes-893100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p NoKubernetes-893100 -n NoKubernetes-893100: (6.0981783s)
helpers_test.go:261: (dbg) Run:  kubectl --context NoKubernetes-893100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestNoKubernetes/serial/StartWithK8s FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestNoKubernetes/serial/StartWithK8s (317.38s)
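The post-mortem above closes with two diagnostic commands (helpers_test.go:254 and helpers_test.go:261). As a minimal sketch, the same checks can be re-run by hand against the failed profile; this assumes the CI-built out/minikube-windows-amd64.exe binary and the NoKubernetes-893100 profile still exist on the worker, which may not hold after cleanup:

	# report API server status for the failed profile (same command as helpers_test.go:254)
	out/minikube-windows-amd64.exe status --format={{.APIServer}} -p NoKubernetes-893100 -n NoKubernetes-893100
	# list any pods that are not Running in the profile's context (same command as helpers_test.go:261)
	kubectl --context NoKubernetes-893100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running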

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (227.77s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-893100 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-893100 --alsologtostderr -v=1 --driver=hyperv: (2m57.4103648s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-893100] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node pause-893100 in cluster pause-893100
	* Updating the running hyperv "pause-893100" VM ...
	* Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	
	  - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	* Done! kubectl is now configured to use "pause-893100" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 20:03:50.119470   10012 out.go:296] Setting OutFile to fd 932 ...
	I0524 20:03:50.182517   10012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 20:03:50.182517   10012 out.go:309] Setting ErrFile to fd 1588...
	I0524 20:03:50.182517   10012 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 20:03:50.202457   10012 out.go:303] Setting JSON to false
	I0524 20:03:50.211462   10012 start.go:125] hostinfo: {"hostname":"minikube1","uptime":7143,"bootTime":1684951486,"procs":162,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2965 Build 19045.2965","kernelVersion":"10.0.19045.2965 Build 19045.2965","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0524 20:03:50.211462   10012 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 20:03:50.349570   10012 out.go:177] * [pause-893100] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	I0524 20:03:50.493262   10012 notify.go:220] Checking for updates...
	I0524 20:03:50.497654   10012 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 20:03:50.686525   10012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 20:03:50.839199   10012 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0524 20:03:51.035399   10012 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 20:03:51.292966   10012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 20:03:51.434870   10012 config.go:182] Loaded profile config "pause-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 20:03:51.435792   10012 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 20:03:53.204508   10012 out.go:177] * Using the hyperv driver based on existing profile
	I0524 20:03:53.206634   10012 start.go:295] selected driver: hyperv
	I0524 20:03:53.206634   10012 start.go:870] validating driver "hyperv" against &{Name:pause-893100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.27.2 ClusterName:pause-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.136.175 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:
false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:03:53.207510   10012 start.go:881] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 20:03:53.276508   10012 cni.go:84] Creating CNI manager for ""
	I0524 20:03:53.276508   10012 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:03:53.276508   10012 start_flags.go:319] config:
	{Name:pause-893100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:pause-893100 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.136.175 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registr
y-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:03:53.277357   10012 iso.go:125] acquiring lock: {Name:mk3b29db369ab0f922ac5eeb788beee87e18ec94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:03:53.404105   10012 out.go:177] * Starting control plane node pause-893100 in cluster pause-893100
	I0524 20:03:53.543754   10012 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 20:03:53.544113   10012 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0524 20:03:53.544113   10012 cache.go:57] Caching tarball of preloaded images
	I0524 20:03:53.544598   10012 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0524 20:03:53.544758   10012 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 20:03:53.544969   10012 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\config.json ...
	I0524 20:03:53.549211   10012 cache.go:195] Successfully downloaded all kic artifacts
	I0524 20:03:53.549336   10012 start.go:364] acquiring machines lock for pause-893100: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 20:05:27.312145   10012 start.go:368] acquired machines lock for "pause-893100" in 1m33.762862s
	I0524 20:05:27.312145   10012 start.go:96] Skipping create...Using existing machine configuration
	I0524 20:05:27.312145   10012 fix.go:55] fixHost starting: 
	I0524 20:05:27.312817   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:28.150138   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:28.150235   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:28.150269   10012 fix.go:103] recreateIfNeeded on pause-893100: state=Running err=<nil>
	W0524 20:05:28.150269   10012 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 20:05:28.155561   10012 out.go:177] * Updating the running hyperv "pause-893100" VM ...
	I0524 20:05:28.162930   10012 machine.go:88] provisioning docker machine ...
	I0524 20:05:28.162930   10012 buildroot.go:166] provisioning hostname "pause-893100"
	I0524 20:05:28.162930   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:28.977760   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:28.977760   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:28.977829   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:30.238292   10012 main.go:141] libmachine: [stdout =====>] : 172.27.136.175
	
	I0524 20:05:30.238292   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:30.243860   10012 main.go:141] libmachine: Using SSH client type: native
	I0524 20:05:30.245422   10012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.136.175 22 <nil> <nil>}
	I0524 20:05:30.245582   10012 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-893100 && echo "pause-893100" | sudo tee /etc/hostname
	I0524 20:05:30.425364   10012 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-893100
	
	I0524 20:05:30.425364   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:31.263776   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:31.263776   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:31.263776   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:32.526371   10012 main.go:141] libmachine: [stdout =====>] : 172.27.136.175
	
	I0524 20:05:32.526371   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:32.533395   10012 main.go:141] libmachine: Using SSH client type: native
	I0524 20:05:32.535414   10012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.136.175 22 <nil> <nil>}
	I0524 20:05:32.535414   10012 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-893100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-893100/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-893100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 20:05:32.676897   10012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 20:05:32.676897   10012 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0524 20:05:32.676897   10012 buildroot.go:174] setting up certificates
	I0524 20:05:32.676897   10012 provision.go:83] configureAuth start
	I0524 20:05:32.676897   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:33.628872   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:33.628872   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:33.628872   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:34.994436   10012 main.go:141] libmachine: [stdout =====>] : 172.27.136.175
	
	I0524 20:05:34.994436   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:34.994633   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:35.817868   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:35.817984   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:35.817984   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:37.003062   10012 main.go:141] libmachine: [stdout =====>] : 172.27.136.175
	
	I0524 20:05:37.003062   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:37.003062   10012 provision.go:138] copyHostCerts
	I0524 20:05:37.003847   10012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0524 20:05:37.003847   10012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0524 20:05:37.003847   10012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0524 20:05:37.005837   10012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0524 20:05:37.005837   10012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0524 20:05:37.005837   10012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0524 20:05:37.007815   10012 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0524 20:05:37.007815   10012 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0524 20:05:37.007815   10012 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0524 20:05:37.008811   10012 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-893100 san=[172.27.136.175 172.27.136.175 localhost 127.0.0.1 minikube pause-893100]
	I0524 20:05:37.212583   10012 provision.go:172] copyRemoteCerts
	I0524 20:05:37.222573   10012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 20:05:37.222573   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:38.062300   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:38.062300   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:38.062300   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:39.232586   10012 main.go:141] libmachine: [stdout =====>] : 172.27.136.175
	
	I0524 20:05:39.232586   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:39.232586   10012 sshutil.go:53] new ssh client: &{IP:172.27.136.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-893100\id_rsa Username:docker}
	I0524 20:05:39.359261   10012 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.136666s)
	I0524 20:05:39.359261   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0524 20:05:39.403533   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1212 bytes)
	I0524 20:05:39.456454   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0524 20:05:39.521684   10012 provision.go:86] duration metric: configureAuth took 6.8447918s
	I0524 20:05:39.521684   10012 buildroot.go:189] setting minikube options for container-runtime
	I0524 20:05:39.522339   10012 config.go:182] Loaded profile config "pause-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 20:05:39.522339   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:40.756866   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:40.757092   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:40.757265   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:41.927003   10012 main.go:141] libmachine: [stdout =====>] : 172.27.136.175
	
	I0524 20:05:41.927184   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:41.931380   10012 main.go:141] libmachine: Using SSH client type: native
	I0524 20:05:41.932145   10012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.136.175 22 <nil> <nil>}
	I0524 20:05:41.932145   10012 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 20:05:42.085937   10012 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 20:05:42.085937   10012 buildroot.go:70] root file system type: tmpfs
	I0524 20:05:42.086169   10012 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 20:05:42.086169   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:42.886752   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:42.886846   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:42.886846   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:44.030923   10012 main.go:141] libmachine: [stdout =====>] : 172.27.136.175
	
	I0524 20:05:44.030923   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:44.035070   10012 main.go:141] libmachine: Using SSH client type: native
	I0524 20:05:44.035887   10012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.136.175 22 <nil> <nil>}
	I0524 20:05:44.035887   10012 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 20:05:44.218121   10012 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 20:05:44.218205   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:45.007156   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:45.007243   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:45.007243   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:46.157500   10012 main.go:141] libmachine: [stdout =====>] : 172.27.136.175
	
	I0524 20:05:46.157592   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:46.161703   10012 main.go:141] libmachine: Using SSH client type: native
	I0524 20:05:46.162317   10012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.136.175 22 <nil> <nil>}
	I0524 20:05:46.162317   10012 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 20:05:46.344366   10012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 20:05:46.344366   10012 machine.go:91] provisioned docker machine in 18.1814473s
	I0524 20:05:46.344366   10012 start.go:300] post-start starting for "pause-893100" (driver="hyperv")
	I0524 20:05:46.344366   10012 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 20:05:46.355801   10012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 20:05:46.355801   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:47.124854   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:47.124854   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:47.124970   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:48.270015   10012 main.go:141] libmachine: [stdout =====>] : 172.27.136.175
	
	I0524 20:05:48.270015   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:48.270015   10012 sshutil.go:53] new ssh client: &{IP:172.27.136.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-893100\id_rsa Username:docker}
	I0524 20:05:48.387193   10012 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (2.0313933s)
	I0524 20:05:48.402269   10012 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 20:05:48.411876   10012 info.go:137] Remote host: Buildroot 2021.02.12
	I0524 20:05:48.411962   10012 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0524 20:05:48.412379   10012 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0524 20:05:48.413426   10012 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> 65602.pem in /etc/ssl/certs
	I0524 20:05:48.423255   10012 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0524 20:05:48.441884   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /etc/ssl/certs/65602.pem (1708 bytes)
	I0524 20:05:48.487009   10012 start.go:303] post-start completed in 2.1426438s
	I0524 20:05:48.487081   10012 fix.go:57] fixHost completed within 21.1748764s
	I0524 20:05:48.487081   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:49.318032   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:49.318032   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:49.318124   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:50.461397   10012 main.go:141] libmachine: [stdout =====>] : 172.27.136.175
	
	I0524 20:05:50.461583   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:50.466068   10012 main.go:141] libmachine: Using SSH client type: native
	I0524 20:05:50.466749   10012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.136.175 22 <nil> <nil>}
	I0524 20:05:50.466749   10012 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0524 20:05:50.625367   10012 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684958750.609295376
	
	I0524 20:05:50.625449   10012 fix.go:207] guest clock: 1684958750.609295376
	I0524 20:05:50.625470   10012 fix.go:220] Guest: 2023-05-24 20:05:50.609295376 +0000 UTC Remote: 2023-05-24 20:05:48.4870817 +0000 UTC m=+118.444597701 (delta=2.122213676s)
	I0524 20:05:50.625531   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:51.425069   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:51.425069   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:51.425069   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:52.566002   10012 main.go:141] libmachine: [stdout =====>] : 172.27.136.175
	
	I0524 20:05:52.566174   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:52.569183   10012 main.go:141] libmachine: Using SSH client type: native
	I0524 20:05:52.571218   10012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.136.175 22 <nil> <nil>}
	I0524 20:05:52.571280   10012 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1684958750
	I0524 20:05:52.740703   10012 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May 24 20:05:50 UTC 2023
	
	I0524 20:05:52.740801   10012 fix.go:227] clock set: Wed May 24 20:05:50 UTC 2023
	 (err=<nil>)
	I0524 20:05:52.740801   10012 start.go:83] releasing machines lock for "pause-893100", held for 25.4286714s
	I0524 20:05:52.741024   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:53.589034   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:53.589034   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:53.589110   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:54.814317   10012 main.go:141] libmachine: [stdout =====>] : 172.27.136.175
	
	I0524 20:05:54.814630   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:54.817400   10012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 20:05:54.817973   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:54.826521   10012 ssh_runner.go:195] Run: cat /version.json
	I0524 20:05:54.826521   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-893100 ).state
	I0524 20:05:55.678592   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:55.678592   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:55.678592   10012 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:05:55.678691   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:55.678691   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:55.678889   10012 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:05:56.946962   10012 main.go:141] libmachine: [stdout =====>] : 172.27.136.175
	
	I0524 20:05:56.947056   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:56.947772   10012 sshutil.go:53] new ssh client: &{IP:172.27.136.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-893100\id_rsa Username:docker}
	I0524 20:05:56.978742   10012 main.go:141] libmachine: [stdout =====>] : 172.27.136.175
	
	I0524 20:05:56.978802   10012 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:05:56.978802   10012 sshutil.go:53] new ssh client: &{IP:172.27.136.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\pause-893100\id_rsa Username:docker}
	I0524 20:05:57.126983   10012 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.3095839s)
	I0524 20:05:57.126983   10012 ssh_runner.go:235] Completed: cat /version.json: (2.3004624s)
	I0524 20:05:57.138516   10012 ssh_runner.go:195] Run: systemctl --version
	I0524 20:05:57.159150   10012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 20:05:57.167070   10012 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 20:05:57.180894   10012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0524 20:05:57.210136   10012 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0524 20:05:57.210270   10012 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 20:05:57.219888   10012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:05:57.261075   10012 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:05:57.261075   10012 docker.go:563] Images already preloaded, skipping extraction
	I0524 20:05:57.261075   10012 start.go:481] detecting cgroup driver to use...
	I0524 20:05:57.261075   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:05:57.310196   10012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 20:05:57.346109   10012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 20:05:57.363835   10012 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 20:05:57.373589   10012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 20:05:57.407063   10012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:05:57.437044   10012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 20:05:57.472279   10012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:05:57.501923   10012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 20:05:57.532671   10012 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 20:05:57.561699   10012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 20:05:57.588024   10012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 20:05:57.622791   10012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:05:57.868410   10012 ssh_runner.go:195] Run: sudo systemctl restart containerd
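The run of sed commands above switches containerd to the cgroupfs driver by editing /etc/containerd/config.toml in place (sandbox image, SystemdCgroup = false, runc v2 runtime, conf_dir) before the daemon-reload and restart. A rough Go equivalent of just the SystemdCgroup edit, shown only to illustrate the in-place rewrite; the path and key come from the logged commands:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}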
	I0524 20:05:57.901227   10012 start.go:481] detecting cgroup driver to use...
	I0524 20:05:57.911684   10012 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 20:05:57.942991   10012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:05:57.981778   10012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 20:05:58.020444   10012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:05:58.056148   10012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 20:05:58.080655   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:05:58.129287   10012 ssh_runner.go:195] Run: which cri-dockerd
	I0524 20:05:58.153280   10012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 20:05:58.172317   10012 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 20:05:58.223031   10012 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 20:05:58.509710   10012 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 20:05:58.763994   10012 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 20:05:58.764064   10012 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 20:05:58.811848   10012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:05:59.083727   10012 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 20:06:07.025821   10012 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.9420978s)
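The 144-byte /etc/docker/daemon.json written just before this restart is what moves dockerd itself onto the cgroupfs driver. The file's exact contents are not printed in this log; the sketch below writes a daemon.json that sets the cgroup driver via the standard exec-opts key, purely as an assumed, illustrative example:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Illustrative daemon.json; the file minikube actually writes may carry more keys.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile("/etc/docker/daemon.json", data, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

The ~8s restart that follows is dockerd picking this file up.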
	I0524 20:06:07.035631   10012 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 20:06:07.286145   10012 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 20:06:07.528622   10012 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 20:06:07.847079   10012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:08.145384   10012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 20:06:08.218382   10012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:08.594082   10012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 20:06:08.952592   10012 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 20:06:08.962588   10012 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 20:06:08.978277   10012 start.go:549] Will wait 60s for crictl version
	I0524 20:06:08.991301   10012 ssh_runner.go:195] Run: which crictl
	I0524 20:06:09.010872   10012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 20:06:09.158427   10012 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 20:06:09.167406   10012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 20:06:09.235215   10012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 20:06:09.293679   10012 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 20:06:09.293679   10012 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:74:1b:be Flags:up|broadcast|multicast|running}
	I0524 20:06:09.303152   10012 ip.go:210] interface addr: fe80::2d9b:6c8:36de:16db/64
	I0524 20:06:09.304149   10012 ip.go:210] interface addr: 172.27.128.1/20
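The ip.go lines above show how the host-side gateway address (172.27.128.1) is located: enumerate the host's network interfaces, keep the one whose name matches the "vEthernet (Default Switch)" prefix, and read its addresses. A self-contained Go sketch of that lookup (the interface name is taken from the log; this is not minikube's code):

package main

import (
	"fmt"
	"net"
	"os"
	"strings"
)

func main() {
	const prefix = "vEthernet (Default Switch)"

	ifaces, err := net.Interfaces()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			fmt.Printf("%q does not match prefix %q\n", iface.Name, prefix)
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		for _, addr := range addrs {
			fmt.Println("interface addr:", addr)
		}
		return
	}
	fmt.Fprintln(os.Stderr, "no interface matched")
	os.Exit(1)
}

The IPv4 address found this way is then written into the guest's /etc/hosts as host.minikube.internal, which is the grep check on the next line.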
	I0524 20:06:09.316137   10012 ssh_runner.go:195] Run: grep 172.27.128.1	host.minikube.internal$ /etc/hosts
	I0524 20:06:09.324681   10012 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 20:06:09.332631   10012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:06:09.369439   10012 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:06:09.369531   10012 docker.go:563] Images already preloaded, skipping extraction
	I0524 20:06:09.378738   10012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:06:09.416677   10012 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:06:09.416752   10012 cache_images.go:84] Images are preloaded, skipping loading
	I0524 20:06:09.424464   10012 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 20:06:09.475847   10012 cni.go:84] Creating CNI manager for ""
	I0524 20:06:09.475914   10012 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:09.475914   10012 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 20:06:09.475982   10012 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.136.175 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-893100 NodeName:pause-893100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.136.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.136.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 20:06:09.476249   10012 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.136.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-893100"
	  kubeletExtraArgs:
	    node-ip: 172.27.136.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.136.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 20:06:09.476452   10012 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=pause-893100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.136.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:pause-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 20:06:09.485138   10012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 20:06:09.504201   10012 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 20:06:09.513369   10012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 20:06:09.532371   10012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0524 20:06:09.564859   10012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 20:06:09.594750   10012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0524 20:06:09.633134   10012 ssh_runner.go:195] Run: grep 172.27.136.175	control-plane.minikube.internal$ /etc/hosts
	I0524 20:06:09.645572   10012 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100 for IP: 172.27.136.175
	I0524 20:06:09.645572   10012 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:09.646787   10012 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0524 20:06:09.647146   10012 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0524 20:06:09.648111   10012 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\client.key
	I0524 20:06:09.648492   10012 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\apiserver.key.85da34c2
	I0524 20:06:09.648994   10012 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\proxy-client.key
	I0524 20:06:09.650350   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem (1338 bytes)
	W0524 20:06:09.650774   10012 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560_empty.pem, impossibly tiny 0 bytes
	I0524 20:06:09.650774   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0524 20:06:09.651481   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0524 20:06:09.651481   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0524 20:06:09.652160   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0524 20:06:09.652338   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem (1708 bytes)
	I0524 20:06:09.654026   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 20:06:09.697087   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0524 20:06:09.743829   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 20:06:09.789241   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 20:06:09.832568   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 20:06:09.878635   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 20:06:09.922855   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 20:06:09.967076   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 20:06:10.011306   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 20:06:10.054388   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem --> /usr/share/ca-certificates/6560.pem (1338 bytes)
	I0524 20:06:10.101750   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /usr/share/ca-certificates/65602.pem (1708 bytes)
	I0524 20:06:10.152071   10012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 20:06:10.211751   10012 ssh_runner.go:195] Run: openssl version
	I0524 20:06:10.229774   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 20:06:10.266275   10012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:10.277278   10012 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:10.287269   10012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:10.310865   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 20:06:10.340942   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6560.pem && ln -fs /usr/share/ca-certificates/6560.pem /etc/ssl/certs/6560.pem"
	I0524 20:06:10.366956   10012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6560.pem
	I0524 20:06:10.375774   10012 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 20:06:10.384953   10012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6560.pem
	I0524 20:06:10.405947   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6560.pem /etc/ssl/certs/51391683.0"
	I0524 20:06:10.444116   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65602.pem && ln -fs /usr/share/ca-certificates/65602.pem /etc/ssl/certs/65602.pem"
	I0524 20:06:10.474716   10012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65602.pem
	I0524 20:06:10.482279   10012 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 20:06:10.492728   10012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65602.pem
	I0524 20:06:10.512758   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65602.pem /etc/ssl/certs/3ec20f2e.0"
	I0524 20:06:10.583724   10012 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 20:06:10.605736   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0524 20:06:10.624468   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0524 20:06:10.643480   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0524 20:06:10.669632   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0524 20:06:10.689627   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0524 20:06:10.708343   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
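Each of the openssl -checkend 86400 calls above simply asks whether the certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. The same check expressed in Go with crypto/x509, as an illustration (file path from the log; assumes a PEM-encoded certificate):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	const path = "/var/lib/minikube/certs/apiserver-etcd-client.crt"

	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of: openssl x509 -noout -checkend 86400
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate will not expire within 24h")
}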
	I0524 20:06:10.717465   10012 kubeadm.go:404] StartCluster: {Name:pause-893100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:pause-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.136.175 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:06:10.726272   10012 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 20:06:10.777160   10012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 20:06:10.801082   10012 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0524 20:06:10.801082   10012 kubeadm.go:636] restartCluster start
	I0524 20:06:10.811686   10012 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0524 20:06:10.843193   10012 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:10.843880   10012 kubeconfig.go:92] found "pause-893100" server: "https://172.27.136.175:8443"
	I0524 20:06:10.845890   10012 kapi.go:59] client config for pause-893100: &rest.Config{Host:"https://172.27.136.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-893100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-893100\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 20:06:10.856500   10012 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0524 20:06:10.883825   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:10.893143   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:10.920443   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:11.421350   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:11.430355   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:11.451625   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:11.929976   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:11.939775   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:11.970377   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:12.436501   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:12.447814   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:12.471467   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:12.926501   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:12.937214   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:12.960579   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:13.427159   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:13.438876   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:13.464698   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:13.929541   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:13.947415   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:13.969612   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:14.432643   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:14.442446   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:14.467022   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:14.925651   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:14.938049   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:14.962986   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:15.430640   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:15.441565   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:15.463640   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:15.932254   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:15.947114   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:15.968471   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:16.434507   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:16.449727   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:16.473779   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:16.923657   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:16.934780   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:16.956350   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:17.429985   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:17.441030   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:17.464322   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:17.931369   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:17.941210   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:17.966233   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:18.435534   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:18.446265   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:18.467386   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:18.923047   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:18.934000   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:18.958043   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:19.427096   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:19.438136   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:19.462070   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:19.928866   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:19.938699   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:19.960165   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:20.433357   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:20.444510   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:20.474481   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:20.890580   10012 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
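The repeated "Checking apiserver status" entries above are a polling loop: roughly every 500ms it runs pgrep over SSH looking for a kube-apiserver process, and once the window closes with no hit it concludes the control plane is down ("context deadline exceeded") and falls through to a full reconfigure. A minimal local sketch of that pattern, with the pgrep arguments taken from the log; the 10s deadline and 500ms cadence are inferred from the timestamps and are assumptions for illustration:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process appears
// or the context deadline passes.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// Same probe as in the log: sudo pgrep -xnf kube-apiserver.*minikube.*
		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println("needs reconfigure:", err)
	}
}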
	I0524 20:06:20.890652   10012 kubeadm.go:1123] stopping kube-system containers ...
	I0524 20:06:20.900473   10012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 20:06:20.945688   10012 docker.go:459] Stopping containers: [ce5ccdf7db90 e32e5e876d0b 2fdde93d5dbe f47bd4ea62c0 942ff1fce0b3 f729780e2503 4c6a62dd6d30 154b072e6b05 9c38b6ba36d7 251fb037891f 9fdc40d1ef26 25915e591443 cdc9a8b35153 cb84e1827ba6 d4b4d742aac0 081fe0ce4189 b7b77095ef5a 81a385409af8 f933432635d0 2fab2362d925 eb3fae8732d0 72ae2d8f679f b2a0e99efa06 d9880fb68a45 f6507871d53c 9157454e4296]
	I0524 20:06:20.953822   10012 ssh_runner.go:195] Run: docker stop ce5ccdf7db90 e32e5e876d0b 2fdde93d5dbe f47bd4ea62c0 942ff1fce0b3 f729780e2503 4c6a62dd6d30 154b072e6b05 9c38b6ba36d7 251fb037891f 9fdc40d1ef26 25915e591443 cdc9a8b35153 cb84e1827ba6 d4b4d742aac0 081fe0ce4189 b7b77095ef5a 81a385409af8 f933432635d0 2fab2362d925 eb3fae8732d0 72ae2d8f679f b2a0e99efa06 d9880fb68a45 f6507871d53c 9157454e4296
	I0524 20:06:26.340931   10012 ssh_runner.go:235] Completed: docker stop ce5ccdf7db90 e32e5e876d0b 2fdde93d5dbe f47bd4ea62c0 942ff1fce0b3 f729780e2503 4c6a62dd6d30 154b072e6b05 9c38b6ba36d7 251fb037891f 9fdc40d1ef26 25915e591443 cdc9a8b35153 cb84e1827ba6 d4b4d742aac0 081fe0ce4189 b7b77095ef5a 81a385409af8 f933432635d0 2fab2362d925 eb3fae8732d0 72ae2d8f679f b2a0e99efa06 d9880fb68a45 f6507871d53c 9157454e4296: (5.387112s)
	I0524 20:06:26.353946   10012 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0524 20:06:26.415998   10012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 20:06:26.435993   10012 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 24 20:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 May 24 20:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 May 24 20:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 May 24 20:03 /etc/kubernetes/scheduler.conf
	
	I0524 20:06:26.454002   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0524 20:06:26.488848   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0524 20:06:26.521267   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0524 20:06:26.538077   10012 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:26.548788   10012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0524 20:06:26.582717   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0524 20:06:26.611300   10012 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:26.636218   10012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0524 20:06:26.674616   10012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 20:06:26.696923   10012 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0524 20:06:26.696923   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:26.808540   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:28.718776   10012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.910239s)
	I0524 20:06:28.718887   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:29.128272   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:29.291393   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:29.441004   10012 api_server.go:52] waiting for apiserver process to appear ...
	I0524 20:06:29.456788   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 20:06:29.478798   10012 api_server.go:72] duration metric: took 37.7941ms to wait for apiserver process to appear ...
	I0524 20:06:29.478798   10012 api_server.go:88] waiting for apiserver healthz status ...
	I0524 20:06:29.478798   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:34.493381   10012 api_server.go:269] stopped: https://172.27.136.175:8443/healthz: Get "https://172.27.136.175:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0524 20:06:34.998973   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:35.017204   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0524 20:06:35.017271   10012 api_server.go:103] status: https://172.27.136.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0524 20:06:35.503294   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:35.519446   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0524 20:06:35.519446   10012 api_server.go:103] status: https://172.27.136.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0524 20:06:36.006618   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:36.020613   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0524 20:06:36.020613   10012 api_server.go:103] status: https://172.27.136.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0524 20:06:36.496433   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:36.510434   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 200:
	ok
	I0524 20:06:36.539217   10012 api_server.go:141] control plane version: v1.27.2
	I0524 20:06:36.539217   10012 api_server.go:131] duration metric: took 7.06043s to wait for apiserver health ...
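The healthz exchange above is the usual readiness dance after a kubeadm restart: /healthz answers 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then flips to 200 ok, roughly seven seconds in for this run. A compact Go sketch of polling such an endpoint until it reports 200 or a timeout expires (URL from the log; TLS verification is skipped here purely to keep the illustration short, a real caller should trust the cluster CA):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "https://172.27.136.175:8443/healthz"

	client := &http.Client{
		Timeout: 5 * time.Second,
		// For illustration only; do not skip verification in real code.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		} else {
			fmt.Println("healthz error:", err)
		}
		select {
		case <-ctx.Done():
			fmt.Println("gave up waiting for healthz")
			return
		case <-time.After(500 * time.Millisecond):
		}
	}
}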
	I0524 20:06:36.539217   10012 cni.go:84] Creating CNI manager for ""
	I0524 20:06:36.539217   10012 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:36.543083   10012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 20:06:36.566890   10012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 20:06:36.594851   10012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
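The 457-byte /etc/cni/net.d/1-k8s.conflist copied in above is the bridge CNI configuration announced by the "Configuring bridge CNI" message. Its exact contents are not printed in this log; the sketch below writes a generic bridge-plus-portmap conflist of the kind commonly paired with the 10.244.0.0/16 pod CIDR from the kubeadm config earlier, offered only as an assumed example rather than minikube's actual file:

package main

import (
	"fmt"
	"os"
)

// A generic bridge CNI conflist; the 1-k8s.conflist minikube ships may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}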
	I0524 20:06:36.742044   10012 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 20:06:36.793990   10012 system_pods.go:59] 6 kube-system pods found
	I0524 20:06:36.793990   10012 system_pods.go:61] "coredns-5d78c9869d-ngwxf" [5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0524 20:06:36.793990   10012 system_pods.go:61] "etcd-pause-893100" [042c47b3-76c5-49a8-be92-2eece9ec9522] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0524 20:06:36.793990   10012 system_pods.go:61] "kube-apiserver-pause-893100" [22d4a079-779f-458c-b323-4c7f578ddf80] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0524 20:06:36.793990   10012 system_pods.go:61] "kube-controller-manager-pause-893100" [01772675-fb9c-4142-ac0d-984ba9d4c05f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0524 20:06:36.793990   10012 system_pods.go:61] "kube-proxy-c5vrt" [4372194d-1a11-4f50-97a2-a9b8863e1d2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0524 20:06:36.793990   10012 system_pods.go:61] "kube-scheduler-pause-893100" [e18658f2-46b9-4808-a66b-0b99af639027] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0524 20:06:36.793990   10012 system_pods.go:74] duration metric: took 51.9464ms to wait for pod list to return data ...
	I0524 20:06:36.793990   10012 node_conditions.go:102] verifying NodePressure condition ...
	I0524 20:06:36.814990   10012 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 20:06:36.814990   10012 node_conditions.go:123] node cpu capacity is 2
	I0524 20:06:36.814990   10012 node_conditions.go:105] duration metric: took 21.0001ms to run NodePressure ...
	I0524 20:06:36.814990   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:37.878826   10012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.0638371s)
	I0524 20:06:37.878826   10012 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0524 20:06:37.892821   10012 kubeadm.go:787] kubelet initialised
	I0524 20:06:37.892821   10012 kubeadm.go:788] duration metric: took 13.9948ms waiting for restarted kubelet to initialise ...
	I0524 20:06:37.892821   10012 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 20:06:37.909538   10012 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:39.475468   10012 pod_ready.go:92] pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:39.475468   10012 pod_ready.go:81] duration metric: took 1.5659319s waiting for pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:39.475468   10012 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:41.508151   10012 pod_ready.go:102] pod "etcd-pause-893100" in "kube-system" namespace has status "Ready":"False"
	I0524 20:06:43.512930   10012 pod_ready.go:92] pod "etcd-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.512930   10012 pod_ready.go:81] duration metric: took 4.0374679s waiting for pod "etcd-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.512930   10012 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.530172   10012 pod_ready.go:92] pod "kube-apiserver-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.530172   10012 pod_ready.go:81] duration metric: took 17.2413ms waiting for pod "kube-apiserver-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.530172   10012 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.539962   10012 pod_ready.go:92] pod "kube-controller-manager-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.539962   10012 pod_ready.go:81] duration metric: took 9.7904ms waiting for pod "kube-controller-manager-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.539962   10012 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c5vrt" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.549562   10012 pod_ready.go:92] pod "kube-proxy-c5vrt" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.549562   10012 pod_ready.go:81] duration metric: took 9.6002ms waiting for pod "kube-proxy-c5vrt" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.549562   10012 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.558572   10012 pod_ready.go:92] pod "kube-scheduler-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.558572   10012 pod_ready.go:81] duration metric: took 9.0098ms waiting for pod "kube-scheduler-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.558572   10012 pod_ready.go:38] duration metric: took 5.6657589s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
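The pod_ready waits above check each system-critical pod for the Ready condition before the restart is declared healthy. A hedged client-go sketch of the same check for a single pod (pod name and namespace are from the log; the kubeconfig source, timeout, and polling interval are assumptions for illustration):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes KUBECONFIG points at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-5d78c9869d-ngwxf", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Fprintln(os.Stderr, "timed out waiting for pod to be Ready")
			os.Exit(1)
		case <-time.After(2 * time.Second):
		}
	}
}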
	I0524 20:06:43.559557   10012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 20:06:43.577578   10012 ops.go:34] apiserver oom_adj: -16
	I0524 20:06:43.578578   10012 kubeadm.go:640] restartCluster took 32.7775281s
	I0524 20:06:43.578578   10012 kubeadm.go:406] StartCluster complete in 32.8611452s
	I0524 20:06:43.578578   10012 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:43.578578   10012 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 20:06:43.579549   10012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:43.581558   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 20:06:43.581558   10012 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0524 20:06:43.584564   10012 out.go:177] * Enabled addons: 
	I0524 20:06:43.581558   10012 config.go:182] Loaded profile config "pause-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 20:06:43.587581   10012 addons.go:499] enable addons completed in 6.0225ms: enabled=[]
	I0524 20:06:43.594568   10012 kapi.go:59] client config for pause-893100: &rest.Config{Host:"https://172.27.136.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-893100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-893100\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 20:06:43.600557   10012 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-893100" context rescaled to 1 replicas
	I0524 20:06:43.600557   10012 start.go:223] Will wait 6m0s for node &{Name: IP:172.27.136.175 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 20:06:43.604566   10012 out.go:177] * Verifying Kubernetes components...
	I0524 20:06:43.619566   10012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 20:06:43.740039   10012 node_ready.go:35] waiting up to 6m0s for node "pause-893100" to be "Ready" ...
	I0524 20:06:43.740039   10012 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0524 20:06:43.745039   10012 node_ready.go:49] node "pause-893100" has status "Ready":"True"
	I0524 20:06:43.745039   10012 node_ready.go:38] duration metric: took 5.0003ms waiting for node "pause-893100" to be "Ready" ...
	I0524 20:06:43.745039   10012 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 20:06:43.928187   10012 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:44.321285   10012 pod_ready.go:92] pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:44.321360   10012 pod_ready.go:81] duration metric: took 392.9478ms waiting for pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:44.321360   10012 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:44.714492   10012 pod_ready.go:92] pod "etcd-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:44.714492   10012 pod_ready.go:81] duration metric: took 393.1323ms waiting for pod "etcd-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:44.714492   10012 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.145919   10012 pod_ready.go:92] pod "kube-apiserver-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:45.145919   10012 pod_ready.go:81] duration metric: took 431.4269ms waiting for pod "kube-apiserver-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.145919   10012 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.513680   10012 pod_ready.go:92] pod "kube-controller-manager-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:45.513680   10012 pod_ready.go:81] duration metric: took 367.7614ms waiting for pod "kube-controller-manager-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.513680   10012 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c5vrt" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.908936   10012 pod_ready.go:92] pod "kube-proxy-c5vrt" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:45.908936   10012 pod_ready.go:81] duration metric: took 395.2555ms waiting for pod "kube-proxy-c5vrt" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.908936   10012 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:46.322680   10012 pod_ready.go:92] pod "kube-scheduler-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:46.322680   10012 pod_ready.go:81] duration metric: took 413.7445ms waiting for pod "kube-scheduler-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:46.322680   10012 pod_ready.go:38] duration metric: took 2.5776422s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 20:06:46.322680   10012 api_server.go:52] waiting for apiserver process to appear ...
	I0524 20:06:46.333455   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 20:06:46.357073   10012 api_server.go:72] duration metric: took 2.756517s to wait for apiserver process to appear ...
	I0524 20:06:46.357073   10012 api_server.go:88] waiting for apiserver healthz status ...
	I0524 20:06:46.357073   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:46.365971   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 200:
	ok
	I0524 20:06:46.369815   10012 api_server.go:141] control plane version: v1.27.2
	I0524 20:06:46.369927   10012 api_server.go:131] duration metric: took 12.8544ms to wait for apiserver health ...
	I0524 20:06:46.369927   10012 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 20:06:46.511715   10012 system_pods.go:59] 6 kube-system pods found
	I0524 20:06:46.511715   10012 system_pods.go:61] "coredns-5d78c9869d-ngwxf" [5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "etcd-pause-893100" [042c47b3-76c5-49a8-be92-2eece9ec9522] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "kube-apiserver-pause-893100" [22d4a079-779f-458c-b323-4c7f578ddf80] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "kube-controller-manager-pause-893100" [01772675-fb9c-4142-ac0d-984ba9d4c05f] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "kube-proxy-c5vrt" [4372194d-1a11-4f50-97a2-a9b8863e1d2e] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "kube-scheduler-pause-893100" [e18658f2-46b9-4808-a66b-0b99af639027] Running
	I0524 20:06:46.511787   10012 system_pods.go:74] duration metric: took 141.8598ms to wait for pod list to return data ...
	I0524 20:06:46.511787   10012 default_sa.go:34] waiting for default service account to be created ...
	I0524 20:06:46.711419   10012 default_sa.go:45] found service account: "default"
	I0524 20:06:46.711419   10012 default_sa.go:55] duration metric: took 199.6318ms for default service account to be created ...
	I0524 20:06:46.711941   10012 system_pods.go:116] waiting for k8s-apps to be running ...
	I0524 20:06:46.921968   10012 system_pods.go:86] 6 kube-system pods found
	I0524 20:06:46.921968   10012 system_pods.go:89] "coredns-5d78c9869d-ngwxf" [5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "etcd-pause-893100" [042c47b3-76c5-49a8-be92-2eece9ec9522] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "kube-apiserver-pause-893100" [22d4a079-779f-458c-b323-4c7f578ddf80] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "kube-controller-manager-pause-893100" [01772675-fb9c-4142-ac0d-984ba9d4c05f] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "kube-proxy-c5vrt" [4372194d-1a11-4f50-97a2-a9b8863e1d2e] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "kube-scheduler-pause-893100" [e18658f2-46b9-4808-a66b-0b99af639027] Running
	I0524 20:06:46.921968   10012 system_pods.go:126] duration metric: took 210.0272ms to wait for k8s-apps to be running ...
	I0524 20:06:46.921968   10012 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 20:06:46.933969   10012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 20:06:46.963249   10012 system_svc.go:56] duration metric: took 41.2803ms WaitForService to wait for kubelet.
	I0524 20:06:46.963249   10012 kubeadm.go:581] duration metric: took 3.362693s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 20:06:46.963249   10012 node_conditions.go:102] verifying NodePressure condition ...
	I0524 20:06:47.121890   10012 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 20:06:47.121890   10012 node_conditions.go:123] node cpu capacity is 2
	I0524 20:06:47.122534   10012 node_conditions.go:105] duration metric: took 159.2853ms to run NodePressure ...
	I0524 20:06:47.122658   10012 start.go:228] waiting for startup goroutines ...
	I0524 20:06:47.122658   10012 start.go:233] waiting for cluster config update ...
	I0524 20:06:47.122658   10012 start.go:242] writing updated cluster config ...
	I0524 20:06:47.134855   10012 ssh_runner.go:195] Run: rm -f paused
	I0524 20:06:47.353227   10012 start.go:568] kubectl: 1.18.2, cluster: 1.27.2 (minor skew: 9)
	I0524 20:06:47.356345   10012 out.go:177] 
	W0524 20:06:47.359179   10012 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.27.2.
	! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.27.2.
	I0524 20:06:47.364178   10012 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0524 20:06:47.367185   10012 out.go:177] * Done! kubectl is now configured to use "pause-893100" cluster and "default" namespace by default

                                                
                                                
** /stderr **
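The readiness waits recorded in the stderr log above can be reproduced by hand. A minimal sketch, assuming kubectl is pointed at the pause-893100 context the log reports as configured; the label selectors and timeouts simply mirror those printed by pod_ready.go and node_ready.go:

# Wait for the node, then for the system-critical pods, using the same labels the log lists.
kubectl --context pause-893100 wait --for=condition=Ready node/pause-893100 --timeout=6m
kubectl --context pause-893100 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
kubectl --context pause-893100 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m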
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-893100 -n pause-893100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-893100 -n pause-893100: (5.5643901s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-893100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-893100 logs -n 25: (5.2619768s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p multinode-237000-m02        | multinode-237000-m02      | minikube1\jenkins | v1.30.1 | 24 May 23 19:48 UTC |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p multinode-237000-m03        | multinode-237000-m03      | minikube1\jenkins | v1.30.1 | 24 May 23 19:48 UTC | 24 May 23 19:50 UTC |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| node    | add -p multinode-237000        | multinode-237000          | minikube1\jenkins | v1.30.1 | 24 May 23 19:50 UTC |                     |
	| delete  | -p multinode-237000-m03        | multinode-237000-m03      | minikube1\jenkins | v1.30.1 | 24 May 23 19:50 UTC | 24 May 23 19:50 UTC |
	| delete  | -p multinode-237000            | multinode-237000          | minikube1\jenkins | v1.30.1 | 24 May 23 19:51 UTC | 24 May 23 19:51 UTC |
	| start   | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:51 UTC | 24 May 23 19:55 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr              |                           |                   |         |                     |                     |
	|         | --wait=true --preload=false    |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                           |                   |         |                     |                     |
	| ssh     | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:55 UTC | 24 May 23 19:55 UTC |
	|         | -- docker pull                 |                           |                   |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox    |                           |                   |         |                     |                     |
	| stop    | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:55 UTC | 24 May 23 19:55 UTC |
	| start   | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:55 UTC | 24 May 23 19:57 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --wait=true --driver=hyperv    |                           |                   |         |                     |                     |
	| ssh     | -p test-preload-134100 --      | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:57 UTC | 24 May 23 19:57 UTC |
	|         | docker images                  |                           |                   |         |                     |                     |
	| delete  | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:57 UTC | 24 May 23 19:57 UTC |
	| start   | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 19:57 UTC | 24 May 23 19:59 UTC |
	|         | --memory=2048 --driver=hyperv  |                           |                   |         |                     |                     |
	| stop    | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 19:59 UTC | 24 May 23 19:59 UTC |
	|         | --schedule 5m                  |                           |                   |         |                     |                     |
	| ssh     | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 19:59 UTC | 24 May 23 19:59 UTC |
	|         | -- sudo systemctl show         |                           |                   |         |                     |                     |
	|         | minikube-scheduled-stop        |                           |                   |         |                     |                     |
	|         | --no-page                      |                           |                   |         |                     |                     |
	| stop    | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 19:59 UTC | 24 May 23 20:00 UTC |
	|         | --schedule 5s                  |                           |                   |         |                     |                     |
	| delete  | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC | 24 May 23 20:01 UTC |
	| start   | -p offline-docker-893100       | offline-docker-893100     | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC | 24 May 23 20:05 UTC |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --memory=2048 --wait=true      |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p pause-893100 --memory=2048  | pause-893100              | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC | 24 May 23 20:03 UTC |
	|         | --install-addons=false         |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv     |                           |                   |         |                     |                     |
	| start   | -p NoKubernetes-893100         | NoKubernetes-893100       | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC |                     |
	|         | --no-kubernetes                |                           |                   |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p NoKubernetes-893100         | NoKubernetes-893100       | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p pause-893100                | pause-893100              | minikube1\jenkins | v1.30.1 | 24 May 23 20:03 UTC | 24 May 23 20:06 UTC |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-893100      | running-upgrade-893100    | minikube1\jenkins | v1.30.1 | 24 May 23 20:04 UTC |                     |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p offline-docker-893100       | offline-docker-893100     | minikube1\jenkins | v1.30.1 | 24 May 23 20:05 UTC | 24 May 23 20:06 UTC |
	| start   | -p force-systemd-flag-052200   | force-systemd-flag-052200 | minikube1\jenkins | v1.30.1 | 24 May 23 20:06 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p NoKubernetes-893100         | NoKubernetes-893100       | minikube1\jenkins | v1.30.1 | 24 May 23 20:06 UTC |                     |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 20:06:00
	Running on machine: minikube1
	Binary: Built with gc go1.20.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 20:06:00.194871    7000 out.go:296] Setting OutFile to fd 1632 ...
	I0524 20:06:00.277864    7000 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 20:06:00.277864    7000 out.go:309] Setting ErrFile to fd 1636...
	I0524 20:06:00.277864    7000 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 20:06:00.301877    7000 out.go:303] Setting JSON to false
	I0524 20:06:00.305881    7000 start.go:125] hostinfo: {"hostname":"minikube1","uptime":7273,"bootTime":1684951486,"procs":160,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2965 Build 19045.2965","kernelVersion":"10.0.19045.2965 Build 19045.2965","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0524 20:06:00.305881    7000 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 20:06:00.310903    7000 out.go:177] * [force-systemd-flag-052200] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	I0524 20:06:00.313898    7000 notify.go:220] Checking for updates...
	I0524 20:06:00.315886    7000 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 20:06:00.318874    7000 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 20:06:00.322896    7000 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0524 20:06:00.328610    7000 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 20:06:00.332277    7000 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 20:05:59.168692    4556 ssh_runner.go:235] Completed: sudo systemctl restart docker: (19.7086593s)
	I0524 20:05:59.168692    4556 start.go:481] detecting cgroup driver to use...
	I0524 20:05:59.168692    4556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:05:59.217719    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 20:05:59.258739    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 20:05:59.280729    4556 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 20:05:59.289804    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 20:05:59.323853    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:05:59.359818    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 20:05:59.391294    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:05:59.422332    4556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 20:05:59.453292    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 20:05:59.487316    4556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 20:05:59.514274    4556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 20:05:59.541947    4556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:05:59.734524    4556 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 20:05:59.766734    4556 start.go:481] detecting cgroup driver to use...
	I0524 20:05:59.780482    4556 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 20:05:59.812886    4556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:05:59.846891    4556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 20:05:59.880996    4556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:05:59.915545    4556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 20:05:59.951453    4556 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 20:06:00.025439    4556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 20:06:00.053617    4556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:06:00.125621    4556 ssh_runner.go:195] Run: which cri-dockerd
	I0524 20:06:00.150623    4556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 20:06:00.170069    4556 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 20:06:00.228863    4556 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 20:06:00.484236    4556 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 20:06:00.699121    4556 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 20:06:00.699121    4556 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 20:06:00.749088    4556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:00.936425    4556 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 20:06:00.337717    7000 config.go:182] Loaded profile config "NoKubernetes-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 20:06:00.339401    7000 config.go:182] Loaded profile config "pause-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 20:06:00.340198    7000 config.go:182] Loaded profile config "running-upgrade-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0524 20:06:00.340198    7000 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 20:06:02.236727    7000 out.go:177] * Using the hyperv driver based on user configuration
	I0524 20:06:02.241752    7000 start.go:295] selected driver: hyperv
	I0524 20:06:02.241752    7000 start.go:870] validating driver "hyperv" against <nil>
	I0524 20:06:02.241752    7000 start.go:881] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 20:06:02.308741    7000 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 20:06:02.309722    7000 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0524 20:06:02.309722    7000 cni.go:84] Creating CNI manager for ""
	I0524 20:06:02.309722    7000 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:02.309722    7000 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 20:06:02.309722    7000 start_flags.go:319] config:
	{Name:force-systemd-flag-052200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-052200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:06:02.310730    7000 iso.go:125] acquiring lock: {Name:mk3b29db369ab0f922ac5eeb788beee87e18ec94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:06:02.314735    7000 out.go:177] * Starting control plane node force-systemd-flag-052200 in cluster force-systemd-flag-052200
	I0524 20:06:02.953206    4556 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.0167823s)
	I0524 20:06:02.963444    4556 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 20:06:03.157701    4556 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 20:06:03.353266    4556 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 20:06:03.569588    4556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:03.748952    4556 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 20:06:03.798296    4556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:03.976157    4556 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 20:06:04.098802    4556 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 20:06:04.111837    4556 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 20:06:04.121845    4556 start.go:549] Will wait 60s for crictl version
	I0524 20:06:04.130849    4556 ssh_runner.go:195] Run: which crictl
	I0524 20:06:04.150684    4556 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 20:06:04.220851    4556 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 20:06:04.229226    4556 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 20:06:04.282631    4556 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 20:06:01.368780   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:01.368780   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:01.368780   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:02.233377   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:02.233557   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:02.233557   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:03.452998   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:03.452998   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:03.452998   10868 provision.go:138] copyHostCerts
	I0524 20:06:03.452998   10868 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0524 20:06:03.453986   10868 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0524 20:06:03.453986   10868 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0524 20:06:03.455998   10868 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0524 20:06:03.455998   10868 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0524 20:06:03.455998   10868 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0524 20:06:03.456992   10868 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0524 20:06:03.456992   10868 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0524 20:06:03.457996   10868 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0524 20:06:03.459004   10868 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-893100 san=[172.27.134.82 172.27.134.82 localhost 127.0.0.1 minikube running-upgrade-893100]
	I0524 20:06:03.728326   10868 provision.go:172] copyRemoteCerts
	I0524 20:06:03.737398   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 20:06:03.737398   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:04.586035   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:04.586035   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:04.586128   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:02.321733    7000 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 20:06:02.321733    7000 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0524 20:06:02.321733    7000 cache.go:57] Caching tarball of preloaded images
	I0524 20:06:02.322748    7000 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0524 20:06:02.322748    7000 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 20:06:02.322748    7000 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-052200\config.json ...
	I0524 20:06:02.322748    7000 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-052200\config.json: {Name:mka0a0923dabc11ea4915f2cdd814ce71e98be0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:02.324750    7000 cache.go:195] Successfully downloaded all kic artifacts
	I0524 20:06:02.324750    7000 start.go:364] acquiring machines lock for force-systemd-flag-052200: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 20:06:04.337415    4556 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 20:06:04.337943    4556 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0524 20:06:04.342740    4556 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0524 20:06:04.342740    4556 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0524 20:06:04.342740    4556 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0524 20:06:04.342740    4556 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:74:1b:be Flags:up|broadcast|multicast|running}
	I0524 20:06:04.345961    4556 ip.go:210] interface addr: fe80::2d9b:6c8:36de:16db/64
	I0524 20:06:04.345961    4556 ip.go:210] interface addr: 172.27.128.1/20
	I0524 20:06:04.355071    4556 ssh_runner.go:195] Run: grep 172.27.128.1	host.minikube.internal$ /etc/hosts
	I0524 20:06:04.361757    4556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 20:06:04.383636    4556 localpath.go:92] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\client.crt -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\client.crt
	I0524 20:06:04.385029    4556 localpath.go:117] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\client.key -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\client.key
	I0524 20:06:04.386571    4556 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 20:06:04.392558    4556 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:06:04.429414    4556 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:06:04.429414    4556 docker.go:563] Images already preloaded, skipping extraction
	I0524 20:06:04.436477    4556 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:06:04.476086    4556 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:06:04.476086    4556 cache_images.go:84] Images are preloaded, skipping loading
	I0524 20:06:04.482629    4556 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 20:06:04.532626    4556 cni.go:84] Creating CNI manager for ""
	I0524 20:06:04.532626    4556 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:04.532626    4556 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 20:06:04.532626    4556 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.134.18 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:NoKubernetes-893100 NodeName:NoKubernetes-893100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.134.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.134.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 20:06:04.532626    4556 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.134.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "NoKubernetes-893100"
	  kubeletExtraArgs:
	    node-ip: 172.27.134.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.134.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 20:06:04.532626    4556 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=NoKubernetes-893100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.134.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:NoKubernetes-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 20:06:04.541620    4556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 20:06:04.568761    4556 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 20:06:04.581711    4556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 20:06:04.605115    4556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0524 20:06:04.638932    4556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 20:06:04.671524    4556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0524 20:06:04.718006    4556 ssh_runner.go:195] Run: grep 172.27.134.18	control-plane.minikube.internal$ /etc/hosts
	I0524 20:06:04.724023    4556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.134.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 20:06:04.746376    4556 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100 for IP: 172.27.134.18
	I0524 20:06:04.746462    4556 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:04.747226    4556 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0524 20:06:04.747373    4556 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0524 20:06:04.748219    4556 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\client.key
	I0524 20:06:04.748219    4556 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key.65e2ae56
	I0524 20:06:04.748755    4556 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt.65e2ae56 with IP's: [172.27.134.18 10.96.0.1 127.0.0.1 10.0.0.1]
	I0524 20:06:04.971535    4556 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt.65e2ae56 ...
	I0524 20:06:04.972539    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt.65e2ae56: {Name:mk3560aeed00029897190182186ed8cda7ba9211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:04.973603    4556 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key.65e2ae56 ...
	I0524 20:06:04.973603    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key.65e2ae56: {Name:mk0dcda055aab9733580bdf04f9905181c59f6fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:04.974581    4556 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt.65e2ae56 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt
	I0524 20:06:04.986543    4556 certs.go:341] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key.65e2ae56 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key
	I0524 20:06:04.987543    4556 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.key
	I0524 20:06:04.987543    4556 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.crt with IP's: []
	I0524 20:06:05.209022    4556 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.crt ...
	I0524 20:06:05.209022    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.crt: {Name:mk855573f394b139659b125b2169fcb2c42c1cda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:05.210021    4556 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.key ...
	I0524 20:06:05.210021    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.key: {Name:mkf5e64627dd020f5c501fa0f12c3043f4dd0c20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:05.222128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem (1338 bytes)
	W0524 20:06:05.222128    4556 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560_empty.pem, impossibly tiny 0 bytes
	I0524 20:06:05.222128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0524 20:06:05.222128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0524 20:06:05.222128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0524 20:06:05.223128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0524 20:06:05.223128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem (1708 bytes)
	I0524 20:06:05.224875    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 20:06:05.272003    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0524 20:06:05.319493    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 20:06:05.370453    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 20:06:05.417021    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 20:06:05.457401    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 20:06:05.500182    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 20:06:05.544310    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 20:06:05.591935    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 20:06:05.638549    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem --> /usr/share/ca-certificates/6560.pem (1338 bytes)
	I0524 20:06:05.680340    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /usr/share/ca-certificates/65602.pem (1708 bytes)
	I0524 20:06:05.726205    4556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 20:06:05.767423    4556 ssh_runner.go:195] Run: openssl version
	I0524 20:06:05.784749    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 20:06:05.813869    4556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:05.821773    4556 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:05.833861    4556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:05.850853    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 20:06:05.878304    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6560.pem && ln -fs /usr/share/ca-certificates/6560.pem /etc/ssl/certs/6560.pem"
	I0524 20:06:05.906297    4556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6560.pem
	I0524 20:06:05.914309    4556 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 20:06:05.925710    4556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6560.pem
	I0524 20:06:05.943582    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6560.pem /etc/ssl/certs/51391683.0"
	I0524 20:06:05.975648    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65602.pem && ln -fs /usr/share/ca-certificates/65602.pem /etc/ssl/certs/65602.pem"
	I0524 20:06:06.015170    4556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65602.pem
	I0524 20:06:06.023978    4556 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 20:06:06.035850    4556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65602.pem
	I0524 20:06:06.056997    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65602.pem /etc/ssl/certs/3ec20f2e.0"
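The run above installs each CA into /usr/share/ca-certificates and then links it under /etc/ssl/certs by its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL-based clients find trusted CAs. A minimal Go sketch of that hash-and-symlink step, with an illustrative path rather than anything taken from this run:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the pattern in the log: compute the OpenSSL subject hash
// of a CA certificate and symlink /etc/ssl/certs/<hash>.0 to it.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // "-f" behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```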
	I0524 20:06:06.094832    4556 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 20:06:06.103406    4556 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 20:06:06.104848    4556 kubeadm.go:404] StartCluster: {Name:NoKubernetes-893100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:NoKubernetes-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.134.18 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:06:06.114867    4556 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 20:06:06.158802    4556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 20:06:06.188512    4556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 20:06:06.213818    4556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 20:06:06.237285    4556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 20:06:06.237285    4556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0524 20:06:07.025821   10012 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.9420978s)
	I0524 20:06:07.035631   10012 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 20:06:07.286145   10012 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 20:06:07.528622   10012 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 20:06:07.847079   10012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:08.145384   10012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 20:06:08.218382   10012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:08.594082   10012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 20:06:08.952592   10012 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 20:06:08.962588   10012 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 20:06:08.978277   10012 start.go:549] Will wait 60s for crictl version
	I0524 20:06:08.991301   10012 ssh_runner.go:195] Run: which crictl
	I0524 20:06:09.010872   10012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 20:06:09.158427   10012 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 20:06:09.167406   10012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 20:06:09.235215   10012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 20:06:05.796410   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:05.796598   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:05.797003   10868 sshutil.go:53] new ssh client: &{IP:172.27.134.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa Username:docker}
	I0524 20:06:05.917620   10868 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.1801s)
	I0524 20:06:05.918104   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0524 20:06:05.946671   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0524 20:06:05.972632   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0524 20:06:05.998139   10868 provision.go:86] duration metric: configureAuth took 6.861093s
	I0524 20:06:05.998139   10868 buildroot.go:189] setting minikube options for container-runtime
	I0524 20:06:05.998139   10868 config.go:182] Loaded profile config "running-upgrade-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0524 20:06:05.998139   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:06.835417   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:06.835417   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:06.835417   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:08.003023   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:08.003096   10868 main.go:141] libmachine: [stderr =====>] : 
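The libmachine lines above resolve the VM's address by shelling out to PowerShell and reading the first IP of the first network adapter. A small Go sketch of the same Hyper-V query; the VM name is only an example:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// vmIPAddress asks Hyper-V (via PowerShell) for the first IP address of a
// VM's first network adapter, the same call the log lines above show.
func vmIPAddress(vmName string) (string, error) {
	cmd := fmt.Sprintf("((Hyper-V\\Get-VM %q).networkadapters[0]).ipaddresses[0]", vmName)
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ip, err := vmIPAddress("running-upgrade-893100")
	if err != nil {
		fmt.Println("query failed:", err)
		return
	}
	fmt.Println("VM IP:", ip)
}
```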
	I0524 20:06:08.008201   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:08.008892   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:08.009428   10868 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 20:06:08.198102   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 20:06:08.198102   10868 buildroot.go:70] root file system type: tmpfs
	I0524 20:06:08.198398   10868 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 20:06:08.198501   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:09.022074   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:09.022074   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:09.022074   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:09.293679   10012 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 20:06:09.293679   10012 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:74:1b:be Flags:up|broadcast|multicast|running}
	I0524 20:06:09.303152   10012 ip.go:210] interface addr: fe80::2d9b:6c8:36de:16db/64
	I0524 20:06:09.304149   10012 ip.go:210] interface addr: 172.27.128.1/20
	I0524 20:06:09.316137   10012 ssh_runner.go:195] Run: grep 172.27.128.1	host.minikube.internal$ /etc/hosts
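Here ip.go scans the host's interfaces for one whose name starts with "vEthernet (Default Switch)" and uses its IPv4 address (172.27.128.1) as host.minikube.internal. A hedged Go sketch of that prefix search, not minikube's actual implementation:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// hostIPForInterface walks the host's interfaces looking for one whose name
// starts with the given prefix and returns its first IPv4 address.
func hostIPForInterface(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return nil, err
		}
		for _, addr := range addrs {
			if ipnet, ok := addr.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q", prefix)
}

func main() {
	ip, err := hostIPForInterface("vEthernet (Default Switch)")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(ip) // e.g. 172.27.128.1 in this run
}
```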
	I0524 20:06:09.324681   10012 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 20:06:09.332631   10012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:06:09.369439   10012 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:06:09.369531   10012 docker.go:563] Images already preloaded, skipping extraction
	I0524 20:06:09.378738   10012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:06:09.416677   10012 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:06:09.416752   10012 cache_images.go:84] Images are preloaded, skipping loading
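The two `docker images` listings above confirm that the preload tarball already provided every required image, so no extraction is needed. A rough Go sketch of such a check; the required list is a shortened example, not the exact set minikube verifies:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// preloadedImages runs the same "docker images" listing the log shows and
// reports which of the required images are missing.
func preloadedImages(required []string) ([]string, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	var missing []string
	for _, img := range required {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing, nil
}

func main() {
	missing, err := preloadedImages([]string{
		"registry.k8s.io/kube-apiserver:v1.27.2",
		"registry.k8s.io/etcd:3.5.7-0",
	})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("missing:", missing)
}
```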
	I0524 20:06:09.424464   10012 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 20:06:09.475847   10012 cni.go:84] Creating CNI manager for ""
	I0524 20:06:09.475914   10012 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:09.475914   10012 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 20:06:09.475982   10012 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.136.175 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-893100 NodeName:pause-893100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.136.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.136.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 20:06:09.476249   10012 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.136.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-893100"
	  kubeletExtraArgs:
	    node-ip: 172.27.136.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.136.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 20:06:09.476452   10012 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=pause-893100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.136.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:pause-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 20:06:09.485138   10012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 20:06:09.504201   10012 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 20:06:09.513369   10012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 20:06:09.532371   10012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0524 20:06:09.564859   10012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 20:06:09.594750   10012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0524 20:06:09.633134   10012 ssh_runner.go:195] Run: grep 172.27.136.175	control-plane.minikube.internal$ /etc/hosts
	I0524 20:06:09.645572   10012 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100 for IP: 172.27.136.175
	I0524 20:06:09.645572   10012 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:09.646787   10012 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0524 20:06:09.647146   10012 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0524 20:06:09.648111   10012 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\client.key
	I0524 20:06:09.648492   10012 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\apiserver.key.85da34c2
	I0524 20:06:09.648994   10012 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\proxy-client.key
	I0524 20:06:09.650350   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem (1338 bytes)
	W0524 20:06:09.650774   10012 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560_empty.pem, impossibly tiny 0 bytes
	I0524 20:06:09.650774   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0524 20:06:09.651481   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0524 20:06:09.651481   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0524 20:06:09.652160   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0524 20:06:09.652338   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem (1708 bytes)
	I0524 20:06:09.654026   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 20:06:09.697087   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0524 20:06:09.743829   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 20:06:09.789241   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 20:06:09.832568   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 20:06:09.878635   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 20:06:09.922855   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 20:06:09.967076   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 20:06:10.011306   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 20:06:10.054388   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem --> /usr/share/ca-certificates/6560.pem (1338 bytes)
	I0524 20:06:10.101750   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /usr/share/ca-certificates/65602.pem (1708 bytes)
	I0524 20:06:10.152071   10012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 20:06:06.507025    4556 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 20:06:06.508025    4556 kubeadm.go:322] W0524 20:06:06.503827    1529 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 20:06:10.225770   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:10.225770   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:10.229774   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:10.229774   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:10.229774   10868 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 20:06:10.399947   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 20:06:10.399947   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:11.230707   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:11.230707   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:11.230707   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:12.467458   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:12.467458   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:12.471467   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:12.472454   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:12.472454   10868 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
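The SSH command above only swaps in docker.service.new and restarts Docker when `diff` reports a change, which keeps the provisioning step idempotent. A local Go sketch of the same compare-then-replace pattern, assuming root on a systemd host; paths are illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnitIfChanged reproduces the "diff || { mv && restart }" pattern from
// the log: a freshly rendered unit file only replaces the installed one, and
// the service is only restarted, when the contents actually differ.
func updateUnitIfChanged(current, candidate, service string) error {
	oldData, _ := os.ReadFile(current) // a missing file reads as empty
	newData, err := os.ReadFile(candidate)
	if err != nil {
		return err
	}
	if bytes.Equal(oldData, newData) {
		return os.Remove(candidate) // nothing changed, drop the .new file
	}
	if err := os.Rename(candidate, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", service},
		{"systemctl", "-f", "restart", service},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := updateUnitIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
	fmt.Println(err)
}
```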
	I0524 20:06:10.211751   10012 ssh_runner.go:195] Run: openssl version
	I0524 20:06:10.229774   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 20:06:10.266275   10012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:10.277278   10012 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:10.287269   10012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:10.310865   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 20:06:10.340942   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6560.pem && ln -fs /usr/share/ca-certificates/6560.pem /etc/ssl/certs/6560.pem"
	I0524 20:06:10.366956   10012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6560.pem
	I0524 20:06:10.375774   10012 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 20:06:10.384953   10012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6560.pem
	I0524 20:06:10.405947   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6560.pem /etc/ssl/certs/51391683.0"
	I0524 20:06:10.444116   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65602.pem && ln -fs /usr/share/ca-certificates/65602.pem /etc/ssl/certs/65602.pem"
	I0524 20:06:10.474716   10012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65602.pem
	I0524 20:06:10.482279   10012 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 20:06:10.492728   10012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65602.pem
	I0524 20:06:10.512758   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65602.pem /etc/ssl/certs/3ec20f2e.0"
	I0524 20:06:10.583724   10012 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 20:06:10.605736   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0524 20:06:10.624468   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0524 20:06:10.643480   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0524 20:06:10.669632   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0524 20:06:10.689627   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0524 20:06:10.708343   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
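Each `openssl x509 -checkend 86400` call above exits zero only if the certificate is still valid one day from now; a non-zero exit forces regeneration. A small Go wrapper around that check; the certificate path is just an example:

```go
package main

import (
	"fmt"
	"os/exec"
)

// certValidForADay wraps the same check as the log lines above: openssl's
// -checkend 86400 exits 0 when the certificate is still valid 86400 seconds
// (one day) from now and non-zero otherwise.
func certValidForADay(certPath string) (bool, error) {
	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
	if err == nil {
		return true, nil
	}
	if _, ok := err.(*exec.ExitError); ok {
		return false, nil // expires within a day (or is already expired)
	}
	return false, err // openssl itself failed to run
}

func main() {
	ok, err := certValidForADay("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	fmt.Println(ok, err)
}
```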
	I0524 20:06:10.717465   10012 kubeadm.go:404] StartCluster: {Name:pause-893100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:pause-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.136.175 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:06:10.726272   10012 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 20:06:10.777160   10012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 20:06:10.801082   10012 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0524 20:06:10.801082   10012 kubeadm.go:636] restartCluster start
	I0524 20:06:10.811686   10012 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0524 20:06:10.843193   10012 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:10.843880   10012 kubeconfig.go:92] found "pause-893100" server: "https://172.27.136.175:8443"
	I0524 20:06:10.845890   10012 kapi.go:59] client config for pause-893100: &rest.Config{Host:"https://172.27.136.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-893100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-893100\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
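The rest.Config dump above is the client configuration minikube builds from the pause-893100 kubeconfig. For comparison only, a hedged client-go sketch that loads a kubeconfig and queries the apiserver; the kubeconfig path is illustrative and this is not minikube's own code:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from an on-disk kubeconfig (path is an example).
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube1\minikube-integration\kubeconfig`)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A simple reachability probe: list the kube-system pods.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
```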
	I0524 20:06:10.856500   10012 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0524 20:06:10.883825   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:10.893143   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:10.920443   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:11.421350   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:11.430355   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:11.451625   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:11.929976   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:11.939775   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:11.970377   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:12.436501   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:12.447814   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:12.471467   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:12.926501   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:12.937214   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:12.960579   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:13.427159   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:13.438876   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:13.464698   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:13.929541   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:13.947415   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:13.969612   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:14.432643   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:14.442446   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:14.467022   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:14.925651   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:14.938049   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:14.962986   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:11.632161    4556 kubeadm.go:322] W0524 20:06:11.628114    1529 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 20:06:15.430640   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:15.441565   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:15.463640   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:15.932254   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:15.947114   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:15.968471   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:16.434507   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:16.449727   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:16.473779   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:16.923657   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:16.934780   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:16.956350   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:17.429985   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:17.441030   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:17.464322   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:17.931369   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:17.941210   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:17.966233   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:18.435534   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:18.446265   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:18.467386   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:18.923047   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:18.934000   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:18.958043   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:19.427096   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:19.438136   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:19.462070   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:19.928866   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:19.938699   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:19.960165   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
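The repeated api_server.go entries above are a poll loop: pgrep for a kube-apiserver process roughly every half second until one appears or the wait times out. A simplified Go sketch of that loop; the interval and timeout are assumptions, not minikube's exact values:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID polls for a kube-apiserver process the way the
// api_server.go lines above do: run pgrep repeatedly until it returns a PID
// or the deadline passes.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(30 * time.Second)
	fmt.Println(pid, err)
}
```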
	I0524 20:06:24.351871    4556 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0524 20:06:24.351965    4556 kubeadm.go:322] [preflight] Running pre-flight checks
	I0524 20:06:24.352240    4556 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0524 20:06:24.352574    4556 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0524 20:06:24.352756    4556 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0524 20:06:24.352990    4556 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0524 20:06:24.355726    4556 out.go:204]   - Generating certificates and keys ...
	I0524 20:06:24.355726    4556 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0524 20:06:24.355726    4556 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost nokubernetes-893100] and IPs [172.27.134.18 127.0.0.1 ::1]
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost nokubernetes-893100] and IPs [172.27.134.18 127.0.0.1 ::1]
	I0524 20:06:24.357726    4556 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0524 20:06:24.357726    4556 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0524 20:06:24.357726    4556 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0524 20:06:24.358732    4556 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0524 20:06:24.362720    4556 out.go:204]   - Booting up control plane ...
	I0524 20:06:24.362720    4556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0524 20:06:24.362720    4556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0524 20:06:24.362720    4556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0524 20:06:24.362720    4556 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0524 20:06:24.363750    4556 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0524 20:06:24.363750    4556 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.003819 seconds
	I0524 20:06:24.363750    4556 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0524 20:06:24.363750    4556 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0524 20:06:24.363750    4556 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0524 20:06:24.364733    4556 kubeadm.go:322] [mark-control-plane] Marking the node nokubernetes-893100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0524 20:06:24.364733    4556 kubeadm.go:322] [bootstrap-token] Using token: vpphdh.2ag8sqvvjsk8wehw
	I0524 20:06:24.367712    4556 out.go:204]   - Configuring RBAC rules ...
	I0524 20:06:24.367712    4556 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0524 20:06:24.367712    4556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0524 20:06:24.368747    4556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0524 20:06:24.368747    4556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0524 20:06:24.368747    4556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0524 20:06:24.368747    4556 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0524 20:06:24.369771    4556 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0524 20:06:24.369771    4556 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0524 20:06:24.369771    4556 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0524 20:06:24.369771    4556 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0524 20:06:24.369771    4556 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0524 20:06:24.370733    4556 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0524 20:06:24.370733    4556 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0524 20:06:24.370733    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0524 20:06:24.370733    4556 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0524 20:06:24.370733    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vpphdh.2ag8sqvvjsk8wehw \
	I0524 20:06:24.370733    4556 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 \
	I0524 20:06:24.370733    4556 kubeadm.go:322] 	--control-plane 
	I0524 20:06:24.370733    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0524 20:06:24.370733    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vpphdh.2ag8sqvvjsk8wehw \
	I0524 20:06:24.370733    4556 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 
	I0524 20:06:24.370733    4556 cni.go:84] Creating CNI manager for ""
	I0524 20:06:24.370733    4556 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:24.373704    4556 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 20:06:20.433357   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:20.444510   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:20.474481   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:20.890580   10012 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0524 20:06:20.890652   10012 kubeadm.go:1123] stopping kube-system containers ...
	I0524 20:06:20.900473   10012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 20:06:20.945688   10012 docker.go:459] Stopping containers: [ce5ccdf7db90 e32e5e876d0b 2fdde93d5dbe f47bd4ea62c0 942ff1fce0b3 f729780e2503 4c6a62dd6d30 154b072e6b05 9c38b6ba36d7 251fb037891f 9fdc40d1ef26 25915e591443 cdc9a8b35153 cb84e1827ba6 d4b4d742aac0 081fe0ce4189 b7b77095ef5a 81a385409af8 f933432635d0 2fab2362d925 eb3fae8732d0 72ae2d8f679f b2a0e99efa06 d9880fb68a45 f6507871d53c 9157454e4296]
	I0524 20:06:20.953822   10012 ssh_runner.go:195] Run: docker stop ce5ccdf7db90 e32e5e876d0b 2fdde93d5dbe f47bd4ea62c0 942ff1fce0b3 f729780e2503 4c6a62dd6d30 154b072e6b05 9c38b6ba36d7 251fb037891f 9fdc40d1ef26 25915e591443 cdc9a8b35153 cb84e1827ba6 d4b4d742aac0 081fe0ce4189 b7b77095ef5a 81a385409af8 f933432635d0 2fab2362d925 eb3fae8732d0 72ae2d8f679f b2a0e99efa06 d9880fb68a45 f6507871d53c 9157454e4296
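Before reconfiguring, restartCluster lists every container named like k8s_*_(kube-system)_ and stops them with a single `docker stop`, exactly as the two commands above show. A compact Go sketch of those two steps:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers mirrors the two docker commands above: list every
// container whose name matches the kube-system pattern, then stop them all
// in one "docker stop" invocation.
func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
}

func main() {
	fmt.Println(stopKubeSystemContainers())
}
```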
	I0524 20:06:24.387732    4556 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 20:06:24.413737    4556 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0524 20:06:24.460770    4556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 20:06:24.470743    4556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e minikube.k8s.io/name=NoKubernetes-893100 minikube.k8s.io/updated_at=2023_05_24T20_06_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 20:06:24.471712    4556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 20:06:24.559407    4556 ops.go:34] apiserver oom_adj: -16
	I0524 20:06:24.979604    4556 kubeadm.go:1076] duration metric: took 518.8342ms to wait for elevateKubeSystemPrivileges.
	I0524 20:06:24.979604    4556 kubeadm.go:406] StartCluster complete in 18.8747661s
	I0524 20:06:24.979604    4556 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:24.979604    4556 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 20:06:24.981234    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:24.983214    4556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 20:06:24.983214    4556 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0524 20:06:24.983214    4556 addons.go:66] Setting storage-provisioner=true in profile "NoKubernetes-893100"
	I0524 20:06:24.983214    4556 addons.go:228] Setting addon storage-provisioner=true in "NoKubernetes-893100"
	I0524 20:06:24.983214    4556 addons.go:66] Setting default-storageclass=true in profile "NoKubernetes-893100"
	I0524 20:06:24.983214    4556 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "NoKubernetes-893100"
	I0524 20:06:24.983214    4556 host.go:66] Checking if "NoKubernetes-893100" exists ...
	I0524 20:06:24.983214    4556 config.go:182] Loaded profile config "NoKubernetes-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 20:06:24.984213    4556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-893100 ).state
	I0524 20:06:24.984213    4556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-893100 ).state
	I0524 20:06:25.211785    4556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0524 20:06:25.563981    4556 kapi.go:248] "coredns" deployment in "kube-system" namespace and "NoKubernetes-893100" context rescaled to 1 replicas
	I0524 20:06:25.563981    4556 start.go:223] Will wait 6m0s for node &{Name: IP:172.27.134.18 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 20:06:25.568118    4556 out.go:177] * Verifying Kubernetes components...
	I0524 20:06:26.340931   10012 ssh_runner.go:235] Completed: docker stop ce5ccdf7db90 e32e5e876d0b 2fdde93d5dbe f47bd4ea62c0 942ff1fce0b3 f729780e2503 4c6a62dd6d30 154b072e6b05 9c38b6ba36d7 251fb037891f 9fdc40d1ef26 25915e591443 cdc9a8b35153 cb84e1827ba6 d4b4d742aac0 081fe0ce4189 b7b77095ef5a 81a385409af8 f933432635d0 2fab2362d925 eb3fae8732d0 72ae2d8f679f b2a0e99efa06 d9880fb68a45 f6507871d53c 9157454e4296: (5.387112s)
	I0524 20:06:26.353946   10012 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0524 20:06:26.415998   10012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 20:06:26.435993   10012 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 24 20:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 May 24 20:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 May 24 20:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 May 24 20:03 /etc/kubernetes/scheduler.conf
	
	I0524 20:06:26.454002   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0524 20:06:26.488848   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0524 20:06:26.521267   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0524 20:06:26.538077   10012 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:26.548788   10012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0524 20:06:26.582717   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0524 20:06:26.611300   10012 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:26.636218   10012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0524 20:06:26.674616   10012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 20:06:26.696923   10012 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0524 20:06:26.696923   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:26.808540   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:28.718776   10012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.910239s)
	I0524 20:06:28.718887   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:29.128272   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:29.291393   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:29.441004   10012 api_server.go:52] waiting for apiserver process to appear ...
	I0524 20:06:29.456788   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 20:06:29.478798   10012 api_server.go:72] duration metric: took 37.7941ms to wait for apiserver process to appear ...
	I0524 20:06:29.478798   10012 api_server.go:88] waiting for apiserver healthz status ...
	I0524 20:06:29.478798   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:26.560253   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
	+++ /lib/systemd/system/docker.service.new
	@@ -3,9 +3,12 @@
	 Documentation=https://docs.docker.com
	 After=network.target  minikube-automount.service docker.socket
	 Requires= minikube-automount.service docker.socket 
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -21,7 +24,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 20:06:26.560426   10868 machine.go:91] provisioned docker machine in 32.1769885s
	I0524 20:06:26.560426   10868 start.go:300] post-start starting for "running-upgrade-893100" (driver="hyperv")
	I0524 20:06:26.560426   10868 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 20:06:26.575492   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 20:06:26.575492   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:27.435244   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:27.435244   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:27.435244   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:28.838356   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:28.838356   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:28.838933   10868 sshutil.go:53] new ssh client: &{IP:172.27.134.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa Username:docker}
	I0524 20:06:28.959761   10868 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (2.3841871s)
	I0524 20:06:28.972843   10868 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 20:06:28.980533   10868 info.go:137] Remote host: Buildroot 2019.02.7
	I0524 20:06:28.980613   10868 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0524 20:06:28.980976   10868 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0524 20:06:28.982117   10868 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> 65602.pem in /etc/ssl/certs
	I0524 20:06:28.995954   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0524 20:06:29.013022   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /etc/ssl/certs/65602.pem (1708 bytes)
	I0524 20:06:29.050228   10868 start.go:303] post-start completed in 2.4898051s
	I0524 20:06:29.050228   10868 fix.go:57] fixHost completed within 36.3092251s
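(Annotation: the ssh_runner lines in this stream open an SSH session to the VM with the machine's private key and run single commands such as "cat /etc/os-release". A minimal sketch of that pattern, assuming golang.org/x/crypto/ssh; the host, user, and key path are copied from the log but are otherwise illustrative.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH opens one SSH session to the VM and runs a single command,
// roughly what the ssh_runner.go lines above record.
func runOverSSH(addr, user, keyPath, command string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local test VM sketch
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(command)
	return string(out), err
}

func main() {
	out, err := runOverSSH("172.27.134.82:22", "docker",
		`C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa`,
		"cat /etc/os-release")
	fmt.Println(out, err)
}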
	I0524 20:06:29.050228   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:29.975618   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:29.975688   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:29.975688   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:33.972428    7000 start.go:368] acquired machines lock for "force-systemd-flag-052200" in 31.6477009s
	I0524 20:06:33.972428    7000 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-052200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-052200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 20:06:33.972428    7000 start.go:125] createHost starting for "" (driver="hyperv")
	I0524 20:06:31.397968   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:31.398200   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:31.402578   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:31.403192   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:31.403192   10868 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0524 20:06:31.691038   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684958791.670023101
	
	I0524 20:06:31.691103   10868 fix.go:207] guest clock: 1684958791.670023101
	I0524 20:06:31.691103   10868 fix.go:220] Guest: 2023-05-24 20:06:31.670023101 +0000 UTC Remote: 2023-05-24 20:06:29.0502283 +0000 UTC m=+109.032679301 (delta=2.619794801s)
	I0524 20:06:31.691177   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:32.544937   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:32.545011   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:32.545381   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:33.789345   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:33.789403   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:33.795876   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:33.796952   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:33.797016   10868 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1684958791
	I0524 20:06:33.971448   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May 24 20:06:31 UTC 2023
	
	I0524 20:06:33.971448   10868 fix.go:227] clock set: Wed May 24 20:06:31 UTC 2023
	 (err=<nil>)
	I0524 20:06:33.971448   10868 start.go:83] releasing machines lock for "running-upgrade-893100", held for 41.2304529s
	I0524 20:06:33.972428   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:34.889266   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:34.889266   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:34.889266   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
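(Annotation: the repeated "[executing ==>]" lines in these streams query Hyper-V by shelling out to PowerShell, e.g. "( Hyper-V\Get-VM <name> ).state" and the network-adapter IP expression. A minimal sketch of that pattern in Go follows; the VM name is just an example and the snippet only works on a Windows host with Hyper-V enabled.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// vmState asks Hyper-V for a VM's state the same way the libmachine log lines do:
// by invoking powershell.exe non-interactively with a one-line expression.
func vmState(name string) (string, error) {
	cmd := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name),
	)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("powershell failed: %w: %s", err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := vmState("running-upgrade-893100")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("VM state:", state) // e.g. "Running"
}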
	I0524 20:06:34.493381   10012 api_server.go:269] stopped: https://172.27.136.175:8443/healthz: Get "https://172.27.136.175:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0524 20:06:34.998973   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:35.017204   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0524 20:06:35.017271   10012 api_server.go:103] status: https://172.27.136.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0524 20:06:33.976431    7000 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0524 20:06:33.976431    7000 start.go:159] libmachine.API.Create for "force-systemd-flag-052200" (driver="hyperv")
	I0524 20:06:33.976431    7000 client.go:168] LocalClient.Create starting
	I0524 20:06:33.977435    7000 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0524 20:06:33.977435    7000 main.go:141] libmachine: Decoding PEM data...
	I0524 20:06:33.977435    7000 main.go:141] libmachine: Parsing certificate...
	I0524 20:06:33.977435    7000 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0524 20:06:33.977435    7000 main.go:141] libmachine: Decoding PEM data...
	I0524 20:06:33.977435    7000 main.go:141] libmachine: Parsing certificate...
	I0524 20:06:33.978420    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0524 20:06:34.493381    7000 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0524 20:06:34.493381    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:34.493381    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0524 20:06:35.503294   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:35.519446   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0524 20:06:35.519446   10012 api_server.go:103] status: https://172.27.136.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0524 20:06:36.006618   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:36.020613   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0524 20:06:36.020613   10012 api_server.go:103] status: https://172.27.136.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0524 20:06:36.496433   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:36.510434   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 200:
	ok
	I0524 20:06:36.539217   10012 api_server.go:141] control plane version: v1.27.2
	I0524 20:06:36.539217   10012 api_server.go:131] duration metric: took 7.06043s to wait for apiserver health ...
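(Annotation: the block above polls https://172.27.136.175:8443/healthz until the apiserver stops returning 500 and answers 200 "ok". A minimal sketch of such a poll loop, assuming TLS verification is skipped for brevity; minikube itself authenticates with the cluster CA and client certificates.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver healthz endpoint until it returns 200
// or the deadline passes, printing the failing checks in the meantime.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://172.27.136.175:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}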
	I0524 20:06:36.539217   10012 cni.go:84] Creating CNI manager for ""
	I0524 20:06:36.539217   10012 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:36.543083   10012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 20:06:36.566890   10012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 20:06:36.594851   10012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0524 20:06:36.742044   10012 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 20:06:36.793990   10012 system_pods.go:59] 6 kube-system pods found
	I0524 20:06:36.793990   10012 system_pods.go:61] "coredns-5d78c9869d-ngwxf" [5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0524 20:06:36.793990   10012 system_pods.go:61] "etcd-pause-893100" [042c47b3-76c5-49a8-be92-2eece9ec9522] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0524 20:06:36.793990   10012 system_pods.go:61] "kube-apiserver-pause-893100" [22d4a079-779f-458c-b323-4c7f578ddf80] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0524 20:06:36.793990   10012 system_pods.go:61] "kube-controller-manager-pause-893100" [01772675-fb9c-4142-ac0d-984ba9d4c05f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0524 20:06:36.793990   10012 system_pods.go:61] "kube-proxy-c5vrt" [4372194d-1a11-4f50-97a2-a9b8863e1d2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0524 20:06:36.793990   10012 system_pods.go:61] "kube-scheduler-pause-893100" [e18658f2-46b9-4808-a66b-0b99af639027] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0524 20:06:36.793990   10012 system_pods.go:74] duration metric: took 51.9464ms to wait for pod list to return data ...
	I0524 20:06:36.793990   10012 node_conditions.go:102] verifying NodePressure condition ...
	I0524 20:06:36.814990   10012 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 20:06:36.814990   10012 node_conditions.go:123] node cpu capacity is 2
	I0524 20:06:36.814990   10012 node_conditions.go:105] duration metric: took 21.0001ms to run NodePressure ...
	I0524 20:06:36.814990   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:37.878826   10012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.0638371s)
	I0524 20:06:37.878826   10012 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0524 20:06:37.892821   10012 kubeadm.go:787] kubelet initialised
	I0524 20:06:37.892821   10012 kubeadm.go:788] duration metric: took 13.9948ms waiting for restarted kubelet to initialise ...
	I0524 20:06:37.892821   10012 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 20:06:37.909538   10012 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:39.475468   10012 pod_ready.go:92] pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:39.475468   10012 pod_ready.go:81] duration metric: took 1.5659319s waiting for pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:39.475468   10012 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:36.338371   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:36.338619   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:36.342252   10868 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 20:06:36.343221   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:36.353265   10868 ssh_runner.go:195] Run: cat /version.json
	I0524 20:06:36.353265   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:37.368370   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:37.368370   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:37.368370   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:37.406338   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:37.406338   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:37.406338   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:38.901480   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:38.901480   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:38.902494   10868 sshutil.go:53] new ssh client: &{IP:172.27.134.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa Username:docker}
	I0524 20:06:38.996335   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:38.996335   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:38.996335   10868 sshutil.go:53] new ssh client: &{IP:172.27.134.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa Username:docker}
	I0524 20:06:39.034420   10868 ssh_runner.go:235] Completed: cat /version.json: (2.6811586s)
	W0524 20:06:39.034420   10868 start.go:409] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0524 20:06:39.048577   10868 ssh_runner.go:195] Run: systemctl --version
	I0524 20:06:39.087442   10868 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 20:06:39.172435   10868 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 20:06:39.172435   10868 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.8301879s)
	I0524 20:06:39.190434   10868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0524 20:06:39.221212   10868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0524 20:06:39.230789   10868 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0524 20:06:39.230789   10868 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0524 20:06:39.230789   10868 start.go:481] detecting cgroup driver to use...
	I0524 20:06:39.231791   10868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:06:39.256768   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0524 20:06:39.278742   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 20:06:39.290371   10868 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 20:06:39.306364   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 20:06:39.341273   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:06:39.366461   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 20:06:39.395843   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:06:39.431834   10868 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 20:06:39.469460   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 20:06:39.528350   10868 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 20:06:39.559807   10868 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 20:06:39.583690   10868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:39.922278   10868 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 20:06:39.960021   10868 start.go:481] detecting cgroup driver to use...
	I0524 20:06:39.978881   10868 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 20:06:40.010550   10868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:06:40.036238   10868 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 20:06:40.110050   10868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:06:40.139694   10868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 20:06:40.157646   10868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:06:40.187302   10868 ssh_runner.go:195] Run: which cri-dockerd
	I0524 20:06:35.360287    7000 main.go:141] libmachine: [stdout =====>] : False
	
	I0524 20:06:35.360287    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:35.360287    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0524 20:06:36.022609    7000 main.go:141] libmachine: [stdout =====>] : True
	
	I0524 20:06:36.022609    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:36.022609    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0524 20:06:38.316496    7000 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0524 20:06:38.316583    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:38.319761    7000 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.30.1-1684536668-16501-amd64.iso...
	I0524 20:06:38.854487    7000 main.go:141] libmachine: Creating SSH key...
	I0524 20:06:39.080432    7000 main.go:141] libmachine: Creating VM...
	I0524 20:06:39.080432    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0524 20:06:41.508151   10012 pod_ready.go:102] pod "etcd-pause-893100" in "kube-system" namespace has status "Ready":"False"
	I0524 20:06:43.512930   10012 pod_ready.go:92] pod "etcd-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.512930   10012 pod_ready.go:81] duration metric: took 4.0374679s waiting for pod "etcd-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.512930   10012 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.530172   10012 pod_ready.go:92] pod "kube-apiserver-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.530172   10012 pod_ready.go:81] duration metric: took 17.2413ms waiting for pod "kube-apiserver-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.530172   10012 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.539962   10012 pod_ready.go:92] pod "kube-controller-manager-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.539962   10012 pod_ready.go:81] duration metric: took 9.7904ms waiting for pod "kube-controller-manager-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.539962   10012 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c5vrt" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.549562   10012 pod_ready.go:92] pod "kube-proxy-c5vrt" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.549562   10012 pod_ready.go:81] duration metric: took 9.6002ms waiting for pod "kube-proxy-c5vrt" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.549562   10012 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.558572   10012 pod_ready.go:92] pod "kube-scheduler-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.558572   10012 pod_ready.go:81] duration metric: took 9.0098ms waiting for pod "kube-scheduler-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.558572   10012 pod_ready.go:38] duration metric: took 5.6657589s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
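(Annotation: the pod_ready lines above wait for each system-critical pod's Ready condition to become True. A minimal sketch of that check using client-go; the kubeconfig path, namespace, pod name, and timeout are taken from the log context but are otherwise assumptions.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-pause-893100", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}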
	I0524 20:06:43.559557   10012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 20:06:43.577578   10012 ops.go:34] apiserver oom_adj: -16
	I0524 20:06:43.578578   10012 kubeadm.go:640] restartCluster took 32.7775281s
	I0524 20:06:43.578578   10012 kubeadm.go:406] StartCluster complete in 32.8611452s
	I0524 20:06:43.578578   10012 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:43.578578   10012 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 20:06:43.579549   10012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:43.581558   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 20:06:43.581558   10012 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0524 20:06:43.584564   10012 out.go:177] * Enabled addons: 
	I0524 20:06:43.581558   10012 config.go:182] Loaded profile config "pause-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 20:06:43.587581   10012 addons.go:499] enable addons completed in 6.0225ms: enabled=[]
	I0524 20:06:43.594568   10012 kapi.go:59] client config for pause-893100: &rest.Config{Host:"https://172.27.136.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-893100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-893100\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 20:06:43.600557   10012 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-893100" context rescaled to 1 replicas
	I0524 20:06:43.600557   10012 start.go:223] Will wait 6m0s for node &{Name: IP:172.27.136.175 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 20:06:43.604566   10012 out.go:177] * Verifying Kubernetes components...
	I0524 20:06:43.619566   10012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 20:06:43.740039   10012 node_ready.go:35] waiting up to 6m0s for node "pause-893100" to be "Ready" ...
	I0524 20:06:43.740039   10012 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0524 20:06:43.745039   10012 node_ready.go:49] node "pause-893100" has status "Ready":"True"
	I0524 20:06:43.745039   10012 node_ready.go:38] duration metric: took 5.0003ms waiting for node "pause-893100" to be "Ready" ...
	I0524 20:06:43.745039   10012 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 20:06:43.928187   10012 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:44.321285   10012 pod_ready.go:92] pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:44.321360   10012 pod_ready.go:81] duration metric: took 392.9478ms waiting for pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:44.321360   10012 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:44.714492   10012 pod_ready.go:92] pod "etcd-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:44.714492   10012 pod_ready.go:81] duration metric: took 393.1323ms waiting for pod "etcd-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:44.714492   10012 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.145919   10012 pod_ready.go:92] pod "kube-apiserver-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:45.145919   10012 pod_ready.go:81] duration metric: took 431.4269ms waiting for pod "kube-apiserver-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.145919   10012 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:40.203647   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 20:06:40.217401   10868 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 20:06:40.246006   10868 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 20:06:40.528783   10868 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 20:06:40.824730   10868 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 20:06:40.824730   10868 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 20:06:40.859463   10868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:41.276580   10868 ssh_runner.go:195] Run: sudo systemctl restart docker
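(Annotation: the docker.go lines above write a small /etc/docker/daemon.json that forces the "cgroupfs" cgroup driver and then restart docker. The 144-byte payload itself is not shown in the log, so the following Go sketch only illustrates the general shape such a file tends to have; every field is an assumption.)

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

func main() {
	// Illustrative daemon.json forcing the cgroupfs driver; not the exact file from the log.
	daemon := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	data, err := json.MarshalIndent(daemon, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// Written locally for the sketch; the real file lives at /etc/docker/daemon.json.
	if err := os.WriteFile("daemon.json", data, 0o644); err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data))
}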
	I0524 20:06:40.942714    7000 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0524 20:06:40.942714    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:40.942860    7000 main.go:141] libmachine: Using switch "Default Switch"
	I0524 20:06:40.942968    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0524 20:06:41.747208    7000 main.go:141] libmachine: [stdout =====>] : True
	
	I0524 20:06:41.747286    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:41.747286    7000 main.go:141] libmachine: Creating VHD
	I0524 20:06:41.747286    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0524 20:06:43.547605    7000 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200\
	                          fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DF99FB25-92D3-4C42-96BF-167E883A3511
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0524 20:06:43.547605    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:43.547605    7000 main.go:141] libmachine: Writing magic tar header
	I0524 20:06:43.547605    7000 main.go:141] libmachine: Writing SSH key tar header
	I0524 20:06:43.556553    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0524 20:06:45.513680   10012 pod_ready.go:92] pod "kube-controller-manager-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:45.513680   10012 pod_ready.go:81] duration metric: took 367.7614ms waiting for pod "kube-controller-manager-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.513680   10012 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c5vrt" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.908936   10012 pod_ready.go:92] pod "kube-proxy-c5vrt" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:45.908936   10012 pod_ready.go:81] duration metric: took 395.2555ms waiting for pod "kube-proxy-c5vrt" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.908936   10012 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:46.322680   10012 pod_ready.go:92] pod "kube-scheduler-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:46.322680   10012 pod_ready.go:81] duration metric: took 413.7445ms waiting for pod "kube-scheduler-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:46.322680   10012 pod_ready.go:38] duration metric: took 2.5776422s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 20:06:46.322680   10012 api_server.go:52] waiting for apiserver process to appear ...
	I0524 20:06:46.333455   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 20:06:46.357073   10012 api_server.go:72] duration metric: took 2.756517s to wait for apiserver process to appear ...
	I0524 20:06:46.357073   10012 api_server.go:88] waiting for apiserver healthz status ...
	I0524 20:06:46.357073   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:46.365971   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 200:
	ok
	I0524 20:06:46.369815   10012 api_server.go:141] control plane version: v1.27.2
	I0524 20:06:46.369927   10012 api_server.go:131] duration metric: took 12.8544ms to wait for apiserver health ...
	I0524 20:06:46.369927   10012 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 20:06:46.511715   10012 system_pods.go:59] 6 kube-system pods found
	I0524 20:06:46.511715   10012 system_pods.go:61] "coredns-5d78c9869d-ngwxf" [5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "etcd-pause-893100" [042c47b3-76c5-49a8-be92-2eece9ec9522] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "kube-apiserver-pause-893100" [22d4a079-779f-458c-b323-4c7f578ddf80] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "kube-controller-manager-pause-893100" [01772675-fb9c-4142-ac0d-984ba9d4c05f] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "kube-proxy-c5vrt" [4372194d-1a11-4f50-97a2-a9b8863e1d2e] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "kube-scheduler-pause-893100" [e18658f2-46b9-4808-a66b-0b99af639027] Running
	I0524 20:06:46.511787   10012 system_pods.go:74] duration metric: took 141.8598ms to wait for pod list to return data ...
	I0524 20:06:46.511787   10012 default_sa.go:34] waiting for default service account to be created ...
	I0524 20:06:46.711419   10012 default_sa.go:45] found service account: "default"
	I0524 20:06:46.711419   10012 default_sa.go:55] duration metric: took 199.6318ms for default service account to be created ...
	I0524 20:06:46.711941   10012 system_pods.go:116] waiting for k8s-apps to be running ...
	I0524 20:06:46.921968   10012 system_pods.go:86] 6 kube-system pods found
	I0524 20:06:46.921968   10012 system_pods.go:89] "coredns-5d78c9869d-ngwxf" [5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "etcd-pause-893100" [042c47b3-76c5-49a8-be92-2eece9ec9522] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "kube-apiserver-pause-893100" [22d4a079-779f-458c-b323-4c7f578ddf80] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "kube-controller-manager-pause-893100" [01772675-fb9c-4142-ac0d-984ba9d4c05f] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "kube-proxy-c5vrt" [4372194d-1a11-4f50-97a2-a9b8863e1d2e] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "kube-scheduler-pause-893100" [e18658f2-46b9-4808-a66b-0b99af639027] Running
	I0524 20:06:46.921968   10012 system_pods.go:126] duration metric: took 210.0272ms to wait for k8s-apps to be running ...
	I0524 20:06:46.921968   10012 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 20:06:46.933969   10012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 20:06:46.963249   10012 system_svc.go:56] duration metric: took 41.2803ms WaitForService to wait for kubelet.
	I0524 20:06:46.963249   10012 kubeadm.go:581] duration metric: took 3.362693s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 20:06:46.963249   10012 node_conditions.go:102] verifying NodePressure condition ...
	I0524 20:06:47.121890   10012 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 20:06:47.121890   10012 node_conditions.go:123] node cpu capacity is 2
	I0524 20:06:47.122534   10012 node_conditions.go:105] duration metric: took 159.2853ms to run NodePressure ...
	I0524 20:06:47.122658   10012 start.go:228] waiting for startup goroutines ...
	I0524 20:06:47.122658   10012 start.go:233] waiting for cluster config update ...
	I0524 20:06:47.122658   10012 start.go:242] writing updated cluster config ...
	I0524 20:06:47.134855   10012 ssh_runner.go:195] Run: rm -f paused
	I0524 20:06:47.353227   10012 start.go:568] kubectl: 1.18.2, cluster: 1.27.2 (minor skew: 9)
	I0524 20:06:47.356345   10012 out.go:177] 
	W0524 20:06:47.359179   10012 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.27.2.
	I0524 20:06:47.364178   10012 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0524 20:06:47.367185   10012 out.go:177] * Done! kubectl is now configured to use "pause-893100" cluster and "default" namespace by default
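	The "(minor skew: 9)" figure above is simply the difference between the kubectl client's minor version (1.18) and the cluster's minor version (1.27); a skew larger than one triggers the compatibility warning. A minimal Go sketch of that comparison, using the two version strings from the log above (illustrative only, not minikube's actual implementation):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorVersion extracts the minor component from a "major.minor.patch" string.
	func minorVersion(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}

	func main() {
		clientVersion := "1.18.2"  // kubectl version reported in the log above
		clusterVersion := "1.27.2" // control plane version reported in the log above

		c, _ := minorVersion(clientVersion)
		s, _ := minorVersion(clusterVersion)

		skew := s - c
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("minor skew: %d\n", skew) // prints 9 for these inputs
		if skew > 1 {
			fmt.Println("warning: kubectl may have incompatibilities with this cluster")
		}
	}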
	I0524 20:06:45.417115    7000 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:06:45.417216    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:45.417326    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200\disk.vhd' -SizeBytes 20000MB
	I0524 20:06:46.675013    7000 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:06:46.675013    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:46.675013    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM force-systemd-flag-052200 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0524 20:06:49.019653    7000 main.go:141] libmachine: [stdout =====>] : 
	Name                      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                      ----- ----------- ----------------- ------   ------             -------
	force-systemd-flag-052200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0524 20:06:49.019805    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:49.019805    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName force-systemd-flag-052200 -DynamicMemoryEnabled $false
	I0524 20:06:49.991249    7000 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:06:49.991249    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:49.991249    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor force-systemd-flag-052200 -Count 2
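	The disk provisioning above follows a fixed sequence: create a small fixed-size VHD, write the boot payload into it, convert it to a dynamic VHD, grow it, then create and size the VM. A compressed Go sketch of driving the same Hyper-V cmdlets through powershell.exe (paths, sizes, and cmdlet arguments are copied from the executed commands above; this is an illustration, not libmachine's code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ps runs one Hyper-V cmdlet line through PowerShell, like the log above.
	func ps(command string) error {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).CombinedOutput()
		fmt.Printf("%s\n%s\n", command, out)
		return err
	}

	func main() {
		base := `C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200`
		steps := []string{
			fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, base),
			// (minikube writes its "magic tar header" payload into fixed.vhd at this point)
			fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, base, base),
			fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, base),
			fmt.Sprintf(`Hyper-V\New-VM force-systemd-flag-052200 -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB`, base),
			`Hyper-V\Set-VMMemory -VMName force-systemd-flag-052200 -DynamicMemoryEnabled $false`,
			`Hyper-V\Set-VMProcessor force-systemd-flag-052200 -Count 2`,
		}
		for _, s := range steps {
			if err := ps(s); err != nil {
				fmt.Println("step failed:", err)
				return
			}
		}
	}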
	I0524 20:06:53.489976   10868 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.2134033s)
	I0524 20:06:53.492790   10868 out.go:177] 
	W0524 20:06:53.495379   10868 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0524 20:06:53.495379   10868 out.go:239] * 
	W0524 20:06:53.497289   10868 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 20:06:53.499747   10868 out.go:177] 
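	The RUNTIME_ENABLE failure above points at "systemctl status docker.service" and "journalctl -xe" for the underlying cause. A minimal Go sketch of collecting those two diagnostics on the node itself (for example after `minikube ssh`); this is an illustration, not part of the test harness:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run executes a command and streams its output, so a non-zero exit
	// still leaves the diagnostic text on screen.
	func run(name string, args ...string) {
		cmd := exec.Command(name, args...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "%s %v failed: %v\n", name, args, err)
		}
	}

	func main() {
		// The two commands the error message above points to.
		run("systemctl", "status", "docker.service", "--no-pager")
		run("journalctl", "-xe", "-u", "docker.service", "--no-pager")
	}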
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-05-24 20:02:08 UTC, ends at Wed 2023-05-24 20:06:57 UTC. --
	May 24 20:06:26 pause-893100 dockerd[6221]: time="2023-05-24T20:06:26.213514084Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 20:06:26 pause-893100 cri-dockerd[6807]: W0524 20:06:26.394085    6807 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	May 24 20:06:29 pause-893100 cri-dockerd[6807]: time="2023-05-24T20:06:29Z" level=error msg="Failed to retrieve checkpoint for sandbox 81a385409af85a997db4aaacc80ab35e009a7d677fc84f26a5cfba52b9eabf1a: checkpoint is not found"
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.378231948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.378504243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.378547743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.378565442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.411302259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.412410839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.412515037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.412609236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:34 pause-893100 cri-dockerd[6807]: time="2023-05-24T20:06:34Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 24 20:06:35 pause-893100 dockerd[6221]: time="2023-05-24T20:06:35.986241824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:35 pause-893100 dockerd[6221]: time="2023-05-24T20:06:35.986774115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:35 pause-893100 dockerd[6221]: time="2023-05-24T20:06:35.987020312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:35 pause-893100 dockerd[6221]: time="2023-05-24T20:06:35.987142510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:36 pause-893100 dockerd[6221]: time="2023-05-24T20:06:36.152748208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:36 pause-893100 dockerd[6221]: time="2023-05-24T20:06:36.153251500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:36 pause-893100 dockerd[6221]: time="2023-05-24T20:06:36.154403682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:36 pause-893100 dockerd[6221]: time="2023-05-24T20:06:36.154803676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:37 pause-893100 cri-dockerd[6807]: time="2023-05-24T20:06:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4552fc2992751f81cdc3c57cde81cde90bedb5779501d12180b8f47e264dcc73/resolv.conf as [nameserver 172.27.128.1]"
	May 24 20:06:38 pause-893100 dockerd[6221]: time="2023-05-24T20:06:38.263573924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:38 pause-893100 dockerd[6221]: time="2023-05-24T20:06:38.263947518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:38 pause-893100 dockerd[6221]: time="2023-05-24T20:06:38.264155815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:38 pause-893100 dockerd[6221]: time="2023-05-24T20:06:38.264289413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	e5a274f56a9e2       ead0a4a53df89       19 seconds ago      Running             coredns                   2                   4552fc2992751
	1fd1529e00654       b8aa50768fd67       22 seconds ago      Running             kube-proxy                2                   f4cbf7412a21e
	e66fc6f6ecd48       86b6af7dd652c       27 seconds ago      Running             etcd                      2                   55ff062f84dac
	5f0b4eeb6d53e       ac2b7465ebba9       27 seconds ago      Running             kube-controller-manager   2                   7c7f06536601c
	ef570771b6adc       c5b13e4f7806d       34 seconds ago      Running             kube-apiserver            2                   0992a99facc03
	8100a9e6f30a9       89e70da428d29       34 seconds ago      Running             kube-scheduler            2                   9a69e81f88970
	ebdf7873fb71a       ac2b7465ebba9       36 seconds ago      Created             kube-controller-manager   1                   ce5ccdf7db908
	e32e5e876d0b4       b8aa50768fd67       40 seconds ago      Exited              kube-proxy                1                   2fdde93d5dbe1
	f47bd4ea62c00       86b6af7dd652c       44 seconds ago      Exited              etcd                      1                   942ff1fce0b3f
	f729780e25038       ead0a4a53df89       46 seconds ago      Exited              coredns                   1                   4c6a62dd6d30f
	cdc9a8b351539       c5b13e4f7806d       53 seconds ago      Created             kube-apiserver            1                   cb84e1827ba60
	d4b4d742aac0e       89e70da428d29       57 seconds ago      Exited              kube-scheduler            1                   081fe0ce41891
	
	* 
	* ==> coredns [e5a274f56a9e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = e68c1f66d66a8b21178767f77ec9bbf4538be12549e49c63ad565269f31e317fbc64a6eb8980e12bd093747c3f544a0bc7c04266dffb836ae54229446b5ea471
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55954 - 36367 "HINFO IN 3852336394710323909.3816590159474463866. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.061967461s
	
	* 
	* ==> coredns [f729780e2503] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e68c1f66d66a8b21178767f77ec9bbf4538be12549e49c63ad565269f31e317fbc64a6eb8980e12bd093747c3f544a0bc7c04266dffb836ae54229446b5ea471
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36600 - 9188 "HINFO IN 3567950089693108609.1612163727084401078. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066989351s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-893100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-893100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=pause-893100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T20_03_31_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 20:03:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-893100
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 20:06:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 20:06:34 +0000   Wed, 24 May 2023 20:03:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 20:06:34 +0000   Wed, 24 May 2023 20:03:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 20:06:34 +0000   Wed, 24 May 2023 20:03:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 20:06:34 +0000   Wed, 24 May 2023 20:03:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.136.175
	  Hostname:    pause-893100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	System Info:
	  Machine ID:                 471d3a12d2fe4889a2e10df10898b515
	  System UUID:                bbf3149d-6008-ce47-9412-ff63c665df4c
	  Boot ID:                    faf33dd5-7445-44d1-b73c-9650c37d87a8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-ngwxf                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m12s
	  kube-system                 etcd-pause-893100                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         3m27s
	  kube-system                 kube-apiserver-pause-893100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-controller-manager-pause-893100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kube-proxy-c5vrt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m13s
	  kube-system                 kube-scheduler-pause-893100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m11s                  kube-proxy       
	  Normal  Starting                 21s                    kube-proxy       
	  Normal  Starting                 3m42s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m42s (x8 over 3m42s)  kubelet          Node pause-893100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m42s (x8 over 3m42s)  kubelet          Node pause-893100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m42s (x7 over 3m42s)  kubelet          Node pause-893100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     3m27s                  kubelet          Node pause-893100 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  3m27s                  kubelet          Node pause-893100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m27s                  kubelet          Node pause-893100 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  3m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m27s                  kubelet          Starting kubelet.
	  Normal  NodeReady                3m21s                  kubelet          Node pause-893100 status is now: NodeReady
	  Normal  RegisteredNode           3m15s                  node-controller  Node pause-893100 event: Registered Node pause-893100 in Controller
	  Normal  Starting                 28s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 28s)      kubelet          Node pause-893100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 28s)      kubelet          Node pause-893100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x7 over 28s)      kubelet          Node pause-893100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10s                    node-controller  Node pause-893100 event: Registered Node pause-893100 in Controller
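	The percentages in the resource tables above are each request divided by the node's allocatable capacity: 750m of 2 allocatable CPUs rounds down to 37%, and 170Mi of 2017500Ki allocatable memory rounds down to 8%. A small Go sketch of that arithmetic using the figures from this output:

	package main

	import "fmt"

	func main() {
		// Allocatable capacity from the node description above.
		allocatableCPUMilli := int64(2 * 1000) // 2 CPUs = 2000m
		allocatableMemKi := int64(2017500)     // memory in Ki

		// Summed requests from the "Allocated resources" table above.
		cpuRequestMilli := int64(750)     // 750m
		memRequestKi := int64(170 * 1024) // 170Mi in Ki

		fmt.Printf("cpu:    %d%%\n", cpuRequestMilli*100/allocatableCPUMilli) // 37%
		fmt.Printf("memory: %d%%\n", memRequestKi*100/allocatableMemKi)       // 8%
	}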
	
	* 
	* ==> dmesg <==
	* [  +0.756962] systemd-fstab-generator[1069]: Ignoring "noauto" for root device
	[  +0.631769] systemd-fstab-generator[1107]: Ignoring "noauto" for root device
	[  +0.166505] systemd-fstab-generator[1118]: Ignoring "noauto" for root device
	[  +0.197705] systemd-fstab-generator[1131]: Ignoring "noauto" for root device
	[  +1.859023] systemd-fstab-generator[1278]: Ignoring "noauto" for root device
	[  +0.184859] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[  +0.206936] systemd-fstab-generator[1300]: Ignoring "noauto" for root device
	[  +0.178068] systemd-fstab-generator[1311]: Ignoring "noauto" for root device
	[  +0.248119] systemd-fstab-generator[1325]: Ignoring "noauto" for root device
	[  +8.468329] systemd-fstab-generator[1585]: Ignoring "noauto" for root device
	[  +1.018259] kauditd_printk_skb: 68 callbacks suppressed
	[ +14.412599] systemd-fstab-generator[2759]: Ignoring "noauto" for root device
	[ +24.906887] kauditd_printk_skb: 30 callbacks suppressed
	[May24 20:05] systemd-fstab-generator[5586]: Ignoring "noauto" for root device
	[  +0.619768] systemd-fstab-generator[5621]: Ignoring "noauto" for root device
	[  +0.287909] systemd-fstab-generator[5632]: Ignoring "noauto" for root device
	[  +0.292147] systemd-fstab-generator[5645]: Ignoring "noauto" for root device
	[May24 20:06] systemd-fstab-generator[6526]: Ignoring "noauto" for root device
	[  +0.246390] systemd-fstab-generator[6588]: Ignoring "noauto" for root device
	[  +0.273622] systemd-fstab-generator[6607]: Ignoring "noauto" for root device
	[  +0.258422] systemd-fstab-generator[6682]: Ignoring "noauto" for root device
	[  +0.397514] systemd-fstab-generator[6751]: Ignoring "noauto" for root device
	[  +2.062841] kauditd_printk_skb: 34 callbacks suppressed
	[ +15.872825] kauditd_printk_skb: 11 callbacks suppressed
	[  +2.667936] systemd-fstab-generator[8335]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [e66fc6f6ecd4] <==
	* {"level":"info","ts":"2023-05-24T20:06:31.093Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-24T20:06:31.093Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"172.27.136.175:2380"}
	{"level":"info","ts":"2023-05-24T20:06:31.093Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"172.27.136.175:2380"}
	{"level":"info","ts":"2023-05-24T20:06:31.092Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-05-24T20:06:31.093Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-24T20:06:31.095Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-24T20:06:31.095Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-05-24T20:06:31.095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 switched to configuration voters=(13066651530777212280)"}
	{"level":"info","ts":"2023-05-24T20:06:31.095Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b3c6091156c933b8","local-member-id":"b55612564fca3578","added-peer-id":"b55612564fca3578","added-peer-peer-urls":["https://172.27.136.175:2380"]}
	{"level":"info","ts":"2023-05-24T20:06:31.095Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b3c6091156c933b8","local-member-id":"b55612564fca3578","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T20:06:31.095Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T20:06:32.243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 is starting a new election at term 3"}
	{"level":"info","ts":"2023-05-24T20:06:32.243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-05-24T20:06:32.243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 received MsgPreVoteResp from b55612564fca3578 at term 3"}
	{"level":"info","ts":"2023-05-24T20:06:32.243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 became candidate at term 4"}
	{"level":"info","ts":"2023-05-24T20:06:32.244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 received MsgVoteResp from b55612564fca3578 at term 4"}
	{"level":"info","ts":"2023-05-24T20:06:32.244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 became leader at term 4"}
	{"level":"info","ts":"2023-05-24T20:06:32.244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b55612564fca3578 elected leader b55612564fca3578 at term 4"}
	{"level":"info","ts":"2023-05-24T20:06:32.255Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b55612564fca3578","local-member-attributes":"{Name:pause-893100 ClientURLs:[https://172.27.136.175:2379]}","request-path":"/0/members/b55612564fca3578/attributes","cluster-id":"b3c6091156c933b8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-24T20:06:32.255Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T20:06:32.255Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T20:06:32.256Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T20:06:32.257Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"172.27.136.175:2379"}
	{"level":"info","ts":"2023-05-24T20:06:32.257Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T20:06:32.257Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [f47bd4ea62c0] <==
	* {"level":"info","ts":"2023-05-24T20:06:14.524Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T20:06:14.524Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b55612564fca3578","initial-advertise-peer-urls":["https://172.27.136.175:2380"],"listen-peer-urls":["https://172.27.136.175:2380"],"advertise-client-urls":["https://172.27.136.175:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.136.175:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-24T20:06:14.524Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-24T20:06:14.525Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"172.27.136.175:2380"}
	{"level":"info","ts":"2023-05-24T20:06:14.525Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"172.27.136.175:2380"}
	{"level":"info","ts":"2023-05-24T20:06:15.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 is starting a new election at term 2"}
	{"level":"info","ts":"2023-05-24T20:06:15.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-05-24T20:06:15.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 received MsgPreVoteResp from b55612564fca3578 at term 2"}
	{"level":"info","ts":"2023-05-24T20:06:15.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 became candidate at term 3"}
	{"level":"info","ts":"2023-05-24T20:06:15.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 received MsgVoteResp from b55612564fca3578 at term 3"}
	{"level":"info","ts":"2023-05-24T20:06:15.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 became leader at term 3"}
	{"level":"info","ts":"2023-05-24T20:06:15.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b55612564fca3578 elected leader b55612564fca3578 at term 3"}
	{"level":"info","ts":"2023-05-24T20:06:15.602Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b55612564fca3578","local-member-attributes":"{Name:pause-893100 ClientURLs:[https://172.27.136.175:2379]}","request-path":"/0/members/b55612564fca3578/attributes","cluster-id":"b3c6091156c933b8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-24T20:06:15.602Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T20:06:15.602Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T20:06:15.605Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"172.27.136.175:2379"}
	{"level":"info","ts":"2023-05-24T20:06:15.605Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T20:06:15.605Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-24T20:06:15.605Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T20:06:21.145Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-05-24T20:06:21.145Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-893100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.27.136.175:2380"],"advertise-client-urls":["https://172.27.136.175:2379"]}
	{"level":"info","ts":"2023-05-24T20:06:21.149Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b55612564fca3578","current-leader-member-id":"b55612564fca3578"}
	{"level":"info","ts":"2023-05-24T20:06:21.159Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"172.27.136.175:2380"}
	{"level":"info","ts":"2023-05-24T20:06:21.160Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"172.27.136.175:2380"}
	{"level":"info","ts":"2023-05-24T20:06:21.160Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-893100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.27.136.175:2380"],"advertise-client-urls":["https://172.27.136.175:2379"]}
	
	* 
	* ==> kernel <==
	*  20:06:58 up 4 min,  0 users,  load average: 2.04, 1.02, 0.42
	Linux pause-893100 5.10.57 #1 SMP Sat May 20 03:22:25 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [cdc9a8b35153] <==
	* 
	* 
	* ==> kube-apiserver [ef570771b6ad] <==
	* I0524 20:06:34.597555       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0524 20:06:34.597770       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0524 20:06:34.598131       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0524 20:06:34.658848       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0524 20:06:34.659049       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	E0524 20:06:34.869546       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0524 20:06:34.877164       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0524 20:06:34.877414       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0524 20:06:34.878876       1 shared_informer.go:318] Caches are synced for configmaps
	I0524 20:06:34.883994       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0524 20:06:34.884328       1 cache.go:39] Caches are synced for autoregister controller
	I0524 20:06:34.886275       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0524 20:06:34.886559       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0524 20:06:34.896878       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0524 20:06:34.920050       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0524 20:06:34.956049       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0524 20:06:35.001773       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0524 20:06:35.641499       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0524 20:06:37.342019       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0524 20:06:37.478342       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0524 20:06:37.691829       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0524 20:06:37.819551       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0524 20:06:37.842520       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0524 20:06:47.941198       1 controller.go:624] quota admission added evaluator for: endpoints
	I0524 20:06:47.960587       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [5f0b4eeb6d53] <==
	* I0524 20:06:47.948096       1 shared_informer.go:318] Caches are synced for stateful set
	I0524 20:06:47.952440       1 shared_informer.go:318] Caches are synced for deployment
	I0524 20:06:47.953637       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0524 20:06:47.954336       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0524 20:06:47.956827       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0524 20:06:47.965952       1 shared_informer.go:318] Caches are synced for attach detach
	I0524 20:06:47.969974       1 shared_informer.go:318] Caches are synced for cronjob
	I0524 20:06:47.976196       1 shared_informer.go:318] Caches are synced for daemon sets
	I0524 20:06:47.983149       1 shared_informer.go:318] Caches are synced for service account
	I0524 20:06:47.983637       1 shared_informer.go:318] Caches are synced for crt configmap
	I0524 20:06:47.990899       1 shared_informer.go:318] Caches are synced for HPA
	I0524 20:06:47.997887       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0524 20:06:47.998898       1 shared_informer.go:318] Caches are synced for taint
	I0524 20:06:47.999423       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0524 20:06:48.000634       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0524 20:06:48.001058       1 event.go:307] "Event occurred" object="pause-893100" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-893100 event: Registered Node pause-893100 in Controller"
	I0524 20:06:48.001252       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0524 20:06:48.001643       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-893100"
	I0524 20:06:48.002990       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0524 20:06:48.002866       1 taint_manager.go:211] "Sending events to api server"
	I0524 20:06:48.044488       1 shared_informer.go:318] Caches are synced for resource quota
	I0524 20:06:48.114175       1 shared_informer.go:318] Caches are synced for resource quota
	I0524 20:06:48.443393       1 shared_informer.go:318] Caches are synced for garbage collector
	I0524 20:06:48.443521       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0524 20:06:48.507641       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [ebdf7873fb71] <==
	* 
	* 
	* ==> kube-proxy [1fd1529e0065] <==
	* I0524 20:06:36.562799       1 node.go:141] Successfully retrieved node IP: 172.27.136.175
	I0524 20:06:36.563216       1 server_others.go:110] "Detected node IP" address="172.27.136.175"
	I0524 20:06:36.563274       1 server_others.go:551] "Using iptables proxy"
	I0524 20:06:36.662970       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0524 20:06:36.663015       1 server_others.go:190] "Using iptables Proxier"
	I0524 20:06:36.667925       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0524 20:06:36.668648       1 server.go:657] "Version info" version="v1.27.2"
	I0524 20:06:36.668997       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 20:06:36.670265       1 config.go:188] "Starting service config controller"
	I0524 20:06:36.670290       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0524 20:06:36.670328       1 config.go:97] "Starting endpoint slice config controller"
	I0524 20:06:36.670335       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0524 20:06:36.674390       1 config.go:315] "Starting node config controller"
	I0524 20:06:36.674562       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0524 20:06:36.771085       1 shared_informer.go:318] Caches are synced for service config
	I0524 20:06:36.771149       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0524 20:06:36.775259       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [e32e5e876d0b] <==
	* E0524 20:06:18.003611       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-893100": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:19.140396       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-893100": dial tcp 172.27.136.175:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [8100a9e6f30a] <==
	* W0524 20:06:34.774254       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0524 20:06:34.775842       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0524 20:06:34.774724       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0524 20:06:34.780426       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0524 20:06:34.775019       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0524 20:06:34.781027       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0524 20:06:34.781288       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 20:06:34.784201       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 20:06:34.775356       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0524 20:06:34.775442       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0524 20:06:34.775582       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 20:06:34.775728       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0524 20:06:34.775765       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0524 20:06:34.775817       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 20:06:34.780361       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0524 20:06:34.775238       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 20:06:34.786100       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0524 20:06:34.786130       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 20:06:34.786228       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0524 20:06:34.786431       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0524 20:06:34.786455       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 20:06:34.786469       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0524 20:06:34.786801       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 20:06:34.787033       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0524 20:06:36.441574       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [d4b4d742aac0] <==
	* W0524 20:06:05.363879       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://172.27.136.175:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.364088       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://172.27.136.175:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.426924       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://172.27.136.175:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.427109       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://172.27.136.175:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.557879       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://172.27.136.175:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.557927       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://172.27.136.175:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.562781       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://172.27.136.175:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.562831       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://172.27.136.175:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.635809       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.27.136.175:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.635852       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.27.136.175:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.649969       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://172.27.136.175:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.650011       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://172.27.136.175:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.677115       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://172.27.136.175:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.677305       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://172.27.136.175:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.758963       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://172.27.136.175:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.759110       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://172.27.136.175:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.811868       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.27.136.175:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.812552       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.27.136.175:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.832382       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://172.27.136.175:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.832431       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://172.27.136.175:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.834905       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://172.27.136.175:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.834938       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://172.27.136.175:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	I0524 20:06:06.096040       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0524 20:06:06.096136       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0524 20:06:06.096258       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 20:02:08 UTC, ends at Wed 2023-05-24 20:06:58 UTC. --
	May 24 20:06:29 pause-893100 kubelet[8350]: I0524 20:06:29.853569    8350 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9880fb68a45b83957ff4579d73d2112a3f95ee5840f2776e0ad1108e79bbc55"
	May 24 20:06:29 pause-893100 kubelet[8350]: I0524 20:06:29.913511    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/3705417d0b6bda4e87fe0a1802e2b07c-etcd-data\") pod \"etcd-pause-893100\" (UID: \"3705417d0b6bda4e87fe0a1802e2b07c\") " pod="kube-system/etcd-pause-893100"
	May 24 20:06:29 pause-893100 kubelet[8350]: I0524 20:06:29.913595    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abc26da9d5f336247c64c38451a14d81-ca-certs\") pod \"kube-apiserver-pause-893100\" (UID: \"abc26da9d5f336247c64c38451a14d81\") " pod="kube-system/kube-apiserver-pause-893100"
	May 24 20:06:29 pause-893100 kubelet[8350]: I0524 20:06:29.913627    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abc26da9d5f336247c64c38451a14d81-k8s-certs\") pod \"kube-apiserver-pause-893100\" (UID: \"abc26da9d5f336247c64c38451a14d81\") " pod="kube-system/kube-apiserver-pause-893100"
	May 24 20:06:29 pause-893100 kubelet[8350]: I0524 20:06:29.913770    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/3705417d0b6bda4e87fe0a1802e2b07c-etcd-certs\") pod \"etcd-pause-893100\" (UID: \"3705417d0b6bda4e87fe0a1802e2b07c\") " pod="kube-system/etcd-pause-893100"
	May 24 20:06:29 pause-893100 kubelet[8350]: I0524 20:06:29.913921    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abc26da9d5f336247c64c38451a14d81-usr-share-ca-certificates\") pod \"kube-apiserver-pause-893100\" (UID: \"abc26da9d5f336247c64c38451a14d81\") " pod="kube-system/kube-apiserver-pause-893100"
	May 24 20:06:30 pause-893100 kubelet[8350]: I0524 20:06:30.111277    8350 scope.go:115] "RemoveContainer" containerID="ebdf7873fb71aff4f7c65dc81071922f15ffbc8270ff6440ff5d698c81e290da"
	May 24 20:06:30 pause-893100 kubelet[8350]: I0524 20:06:30.159945    8350 scope.go:115] "RemoveContainer" containerID="f47bd4ea62c004a584ab9f2a845ca4e8f17c742f6a34641598f6d5bab2691022"
	May 24 20:06:34 pause-893100 kubelet[8350]: I0524 20:06:34.911261    8350 kubelet_node_status.go:108] "Node was previously registered" node="pause-893100"
	May 24 20:06:34 pause-893100 kubelet[8350]: I0524 20:06:34.911510    8350 kubelet_node_status.go:73] "Successfully registered node" node="pause-893100"
	May 24 20:06:34 pause-893100 kubelet[8350]: I0524 20:06:34.915851    8350 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 24 20:06:34 pause-893100 kubelet[8350]: I0524 20:06:34.917428    8350 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.367546    8350 apiserver.go:52] "Watching apiserver"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.371989    8350 topology_manager.go:212] "Topology Admit Handler"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.372368    8350 topology_manager.go:212] "Topology Admit Handler"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.403588    8350 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.484846    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64-config-volume\") pod \"coredns-5d78c9869d-ngwxf\" (UID: \"5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64\") " pod="kube-system/coredns-5d78c9869d-ngwxf"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.484965    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5jgh\" (UniqueName: \"kubernetes.io/projected/5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64-kube-api-access-b5jgh\") pod \"coredns-5d78c9869d-ngwxf\" (UID: \"5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64\") " pod="kube-system/coredns-5d78c9869d-ngwxf"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.485026    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4372194d-1a11-4f50-97a2-a9b8863e1d2e-xtables-lock\") pod \"kube-proxy-c5vrt\" (UID: \"4372194d-1a11-4f50-97a2-a9b8863e1d2e\") " pod="kube-system/kube-proxy-c5vrt"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.485175    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4372194d-1a11-4f50-97a2-a9b8863e1d2e-lib-modules\") pod \"kube-proxy-c5vrt\" (UID: \"4372194d-1a11-4f50-97a2-a9b8863e1d2e\") " pod="kube-system/kube-proxy-c5vrt"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.485251    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4372194d-1a11-4f50-97a2-a9b8863e1d2e-kube-proxy\") pod \"kube-proxy-c5vrt\" (UID: \"4372194d-1a11-4f50-97a2-a9b8863e1d2e\") " pod="kube-system/kube-proxy-c5vrt"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.485337    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z52zp\" (UniqueName: \"kubernetes.io/projected/4372194d-1a11-4f50-97a2-a9b8863e1d2e-kube-api-access-z52zp\") pod \"kube-proxy-c5vrt\" (UID: \"4372194d-1a11-4f50-97a2-a9b8863e1d2e\") " pod="kube-system/kube-proxy-c5vrt"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.485385    8350 reconciler.go:41] "Reconciler: start to sync state"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.676263    8350 scope.go:115] "RemoveContainer" containerID="e32e5e876d0b41955df703aac3179f7a3b7e88f1a123d54202e89af358a31ba4"
	May 24 20:06:37 pause-893100 kubelet[8350]: I0524 20:06:37.935316    8350 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4552fc2992751f81cdc3c57cde81cde90bedb5779501d12180b8f47e264dcc73"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-893100 -n pause-893100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-893100 -n pause-893100: (6.0140494s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-893100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-893100 -n pause-893100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-893100 -n pause-893100: (5.663374s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-893100 logs -n 25
E0524 20:07:16.684135    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-893100 logs -n 25: (19.5225816s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| start   | -p multinode-237000-m03        | multinode-237000-m03      | minikube1\jenkins | v1.30.1 | 24 May 23 19:48 UTC | 24 May 23 19:50 UTC |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| node    | add -p multinode-237000        | multinode-237000          | minikube1\jenkins | v1.30.1 | 24 May 23 19:50 UTC |                     |
	| delete  | -p multinode-237000-m03        | multinode-237000-m03      | minikube1\jenkins | v1.30.1 | 24 May 23 19:50 UTC | 24 May 23 19:50 UTC |
	| delete  | -p multinode-237000            | multinode-237000          | minikube1\jenkins | v1.30.1 | 24 May 23 19:51 UTC | 24 May 23 19:51 UTC |
	| start   | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:51 UTC | 24 May 23 19:55 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr              |                           |                   |         |                     |                     |
	|         | --wait=true --preload=false    |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                           |                   |         |                     |                     |
	| ssh     | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:55 UTC | 24 May 23 19:55 UTC |
	|         | -- docker pull                 |                           |                   |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox    |                           |                   |         |                     |                     |
	| stop    | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:55 UTC | 24 May 23 19:55 UTC |
	| start   | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:55 UTC | 24 May 23 19:57 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --wait=true --driver=hyperv    |                           |                   |         |                     |                     |
	| ssh     | -p test-preload-134100 --      | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:57 UTC | 24 May 23 19:57 UTC |
	|         | docker images                  |                           |                   |         |                     |                     |
	| delete  | -p test-preload-134100         | test-preload-134100       | minikube1\jenkins | v1.30.1 | 24 May 23 19:57 UTC | 24 May 23 19:57 UTC |
	| start   | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 19:57 UTC | 24 May 23 19:59 UTC |
	|         | --memory=2048 --driver=hyperv  |                           |                   |         |                     |                     |
	| stop    | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 19:59 UTC | 24 May 23 19:59 UTC |
	|         | --schedule 5m                  |                           |                   |         |                     |                     |
	| ssh     | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 19:59 UTC | 24 May 23 19:59 UTC |
	|         | -- sudo systemctl show         |                           |                   |         |                     |                     |
	|         | minikube-scheduled-stop        |                           |                   |         |                     |                     |
	|         | --no-page                      |                           |                   |         |                     |                     |
	| stop    | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 19:59 UTC | 24 May 23 20:00 UTC |
	|         | --schedule 5s                  |                           |                   |         |                     |                     |
	| delete  | -p scheduled-stop-174400       | scheduled-stop-174400     | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC | 24 May 23 20:01 UTC |
	| start   | -p offline-docker-893100       | offline-docker-893100     | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC | 24 May 23 20:05 UTC |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --memory=2048 --wait=true      |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p pause-893100 --memory=2048  | pause-893100              | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC | 24 May 23 20:03 UTC |
	|         | --install-addons=false         |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv     |                           |                   |         |                     |                     |
	| start   | -p NoKubernetes-893100         | NoKubernetes-893100       | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC |                     |
	|         | --no-kubernetes                |                           |                   |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p NoKubernetes-893100         | NoKubernetes-893100       | minikube1\jenkins | v1.30.1 | 24 May 23 20:01 UTC |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p pause-893100                | pause-893100              | minikube1\jenkins | v1.30.1 | 24 May 23 20:03 UTC | 24 May 23 20:06 UTC |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-893100      | running-upgrade-893100    | minikube1\jenkins | v1.30.1 | 24 May 23 20:04 UTC |                     |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p offline-docker-893100       | offline-docker-893100     | minikube1\jenkins | v1.30.1 | 24 May 23 20:05 UTC | 24 May 23 20:06 UTC |
	| start   | -p force-systemd-flag-052200   | force-systemd-flag-052200 | minikube1\jenkins | v1.30.1 | 24 May 23 20:06 UTC |                     |
	|         | --memory=2048 --force-systemd  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p NoKubernetes-893100         | NoKubernetes-893100       | minikube1\jenkins | v1.30.1 | 24 May 23 20:06 UTC |                     |
	| delete  | -p running-upgrade-893100      | running-upgrade-893100    | minikube1\jenkins | v1.30.1 | 24 May 23 20:06 UTC |                     |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 20:06:00
	Running on machine: minikube1
	Binary: Built with gc go1.20.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 20:06:00.194871    7000 out.go:296] Setting OutFile to fd 1632 ...
	I0524 20:06:00.277864    7000 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 20:06:00.277864    7000 out.go:309] Setting ErrFile to fd 1636...
	I0524 20:06:00.277864    7000 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 20:06:00.301877    7000 out.go:303] Setting JSON to false
	I0524 20:06:00.305881    7000 start.go:125] hostinfo: {"hostname":"minikube1","uptime":7273,"bootTime":1684951486,"procs":160,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2965 Build 19045.2965","kernelVersion":"10.0.19045.2965 Build 19045.2965","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0524 20:06:00.305881    7000 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 20:06:00.310903    7000 out.go:177] * [force-systemd-flag-052200] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	I0524 20:06:00.313898    7000 notify.go:220] Checking for updates...
	I0524 20:06:00.315886    7000 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 20:06:00.318874    7000 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 20:06:00.322896    7000 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0524 20:06:00.328610    7000 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 20:06:00.332277    7000 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 20:05:59.168692    4556 ssh_runner.go:235] Completed: sudo systemctl restart docker: (19.7086593s)
	I0524 20:05:59.168692    4556 start.go:481] detecting cgroup driver to use...
	I0524 20:05:59.168692    4556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:05:59.217719    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0524 20:05:59.258739    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 20:05:59.280729    4556 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 20:05:59.289804    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 20:05:59.323853    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:05:59.359818    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 20:05:59.391294    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:05:59.422332    4556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 20:05:59.453292    4556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 20:05:59.487316    4556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 20:05:59.514274    4556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 20:05:59.541947    4556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:05:59.734524    4556 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 20:05:59.766734    4556 start.go:481] detecting cgroup driver to use...
	I0524 20:05:59.780482    4556 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 20:05:59.812886    4556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:05:59.846891    4556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 20:05:59.880996    4556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:05:59.915545    4556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 20:05:59.951453    4556 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0524 20:06:00.025439    4556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 20:06:00.053617    4556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:06:00.125621    4556 ssh_runner.go:195] Run: which cri-dockerd
	I0524 20:06:00.150623    4556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 20:06:00.170069    4556 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 20:06:00.228863    4556 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 20:06:00.484236    4556 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 20:06:00.699121    4556 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 20:06:00.699121    4556 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 20:06:00.749088    4556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:00.936425    4556 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 20:06:00.337717    7000 config.go:182] Loaded profile config "NoKubernetes-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 20:06:00.339401    7000 config.go:182] Loaded profile config "pause-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 20:06:00.340198    7000 config.go:182] Loaded profile config "running-upgrade-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0524 20:06:00.340198    7000 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 20:06:02.236727    7000 out.go:177] * Using the hyperv driver based on user configuration
	I0524 20:06:02.241752    7000 start.go:295] selected driver: hyperv
	I0524 20:06:02.241752    7000 start.go:870] validating driver "hyperv" against <nil>
	I0524 20:06:02.241752    7000 start.go:881] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 20:06:02.308741    7000 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 20:06:02.309722    7000 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0524 20:06:02.309722    7000 cni.go:84] Creating CNI manager for ""
	I0524 20:06:02.309722    7000 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:02.309722    7000 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0524 20:06:02.309722    7000 start_flags.go:319] config:
	{Name:force-systemd-flag-052200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-052200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:06:02.310730    7000 iso.go:125] acquiring lock: {Name:mk3b29db369ab0f922ac5eeb788beee87e18ec94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:06:02.314735    7000 out.go:177] * Starting control plane node force-systemd-flag-052200 in cluster force-systemd-flag-052200
	I0524 20:06:02.953206    4556 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.0167823s)
	I0524 20:06:02.963444    4556 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 20:06:03.157701    4556 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 20:06:03.353266    4556 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 20:06:03.569588    4556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:03.748952    4556 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 20:06:03.798296    4556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:03.976157    4556 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 20:06:04.098802    4556 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 20:06:04.111837    4556 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 20:06:04.121845    4556 start.go:549] Will wait 60s for crictl version
	I0524 20:06:04.130849    4556 ssh_runner.go:195] Run: which crictl
	I0524 20:06:04.150684    4556 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 20:06:04.220851    4556 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 20:06:04.229226    4556 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 20:06:04.282631    4556 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 20:06:01.368780   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:01.368780   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:01.368780   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:02.233377   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:02.233557   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:02.233557   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:03.452998   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:03.452998   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:03.452998   10868 provision.go:138] copyHostCerts
	I0524 20:06:03.452998   10868 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0524 20:06:03.453986   10868 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0524 20:06:03.453986   10868 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0524 20:06:03.455998   10868 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0524 20:06:03.455998   10868 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0524 20:06:03.455998   10868 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0524 20:06:03.456992   10868 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0524 20:06:03.456992   10868 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0524 20:06:03.457996   10868 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0524 20:06:03.459004   10868 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-893100 san=[172.27.134.82 172.27.134.82 localhost 127.0.0.1 minikube running-upgrade-893100]
	I0524 20:06:03.728326   10868 provision.go:172] copyRemoteCerts
	I0524 20:06:03.737398   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 20:06:03.737398   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:04.586035   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:04.586035   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:04.586128   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:02.321733    7000 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 20:06:02.321733    7000 preload.go:148] Found local preload: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0524 20:06:02.321733    7000 cache.go:57] Caching tarball of preloaded images
	I0524 20:06:02.322748    7000 preload.go:174] Found C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0524 20:06:02.322748    7000 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0524 20:06:02.322748    7000 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-052200\config.json ...
	I0524 20:06:02.322748    7000 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\force-systemd-flag-052200\config.json: {Name:mka0a0923dabc11ea4915f2cdd814ce71e98be0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:02.324750    7000 cache.go:195] Successfully downloaded all kic artifacts
	I0524 20:06:02.324750    7000 start.go:364] acquiring machines lock for force-systemd-flag-052200: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 20:06:04.337415    4556 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 20:06:04.337943    4556 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0524 20:06:04.342740    4556 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0524 20:06:04.342740    4556 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0524 20:06:04.342740    4556 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0524 20:06:04.342740    4556 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:74:1b:be Flags:up|broadcast|multicast|running}
	I0524 20:06:04.345961    4556 ip.go:210] interface addr: fe80::2d9b:6c8:36de:16db/64
	I0524 20:06:04.345961    4556 ip.go:210] interface addr: 172.27.128.1/20
	I0524 20:06:04.355071    4556 ssh_runner.go:195] Run: grep 172.27.128.1	host.minikube.internal$ /etc/hosts
	I0524 20:06:04.361757    4556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.27.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 20:06:04.383636    4556 localpath.go:92] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\client.crt -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\client.crt
	I0524 20:06:04.385029    4556 localpath.go:117] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\client.key -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\client.key
	I0524 20:06:04.386571    4556 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 20:06:04.392558    4556 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:06:04.429414    4556 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:06:04.429414    4556 docker.go:563] Images already preloaded, skipping extraction
	I0524 20:06:04.436477    4556 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:06:04.476086    4556 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:06:04.476086    4556 cache_images.go:84] Images are preloaded, skipping loading
	I0524 20:06:04.482629    4556 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0524 20:06:04.532626    4556 cni.go:84] Creating CNI manager for ""
	I0524 20:06:04.532626    4556 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:04.532626    4556 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 20:06:04.532626    4556 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.134.18 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:NoKubernetes-893100 NodeName:NoKubernetes-893100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.134.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.134.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 20:06:04.532626    4556 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.134.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "NoKubernetes-893100"
	  kubeletExtraArgs:
	    node-ip: 172.27.134.18
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.134.18"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 20:06:04.532626    4556 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=NoKubernetes-893100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.134.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:NoKubernetes-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 20:06:04.541620    4556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 20:06:04.568761    4556 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 20:06:04.581711    4556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 20:06:04.605115    4556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0524 20:06:04.638932    4556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 20:06:04.671524    4556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0524 20:06:04.718006    4556 ssh_runner.go:195] Run: grep 172.27.134.18	control-plane.minikube.internal$ /etc/hosts
	I0524 20:06:04.724023    4556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.27.134.18	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0524 20:06:04.746376    4556 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100 for IP: 172.27.134.18
	I0524 20:06:04.746462    4556 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:04.747226    4556 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0524 20:06:04.747373    4556 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0524 20:06:04.748219    4556 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\client.key
	I0524 20:06:04.748219    4556 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key.65e2ae56
	I0524 20:06:04.748755    4556 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt.65e2ae56 with IP's: [172.27.134.18 10.96.0.1 127.0.0.1 10.0.0.1]
	I0524 20:06:04.971535    4556 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt.65e2ae56 ...
	I0524 20:06:04.972539    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt.65e2ae56: {Name:mk3560aeed00029897190182186ed8cda7ba9211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:04.973603    4556 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key.65e2ae56 ...
	I0524 20:06:04.973603    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key.65e2ae56: {Name:mk0dcda055aab9733580bdf04f9905181c59f6fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:04.974581    4556 certs.go:337] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt.65e2ae56 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt
	I0524 20:06:04.986543    4556 certs.go:341] copying C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key.65e2ae56 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key
	I0524 20:06:04.987543    4556 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.key
	I0524 20:06:04.987543    4556 crypto.go:68] Generating cert C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.crt with IP's: []
	I0524 20:06:05.209022    4556 crypto.go:156] Writing cert to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.crt ...
	I0524 20:06:05.209022    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.crt: {Name:mk855573f394b139659b125b2169fcb2c42c1cda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:05.210021    4556 crypto.go:164] Writing key to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.key ...
	I0524 20:06:05.210021    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.key: {Name:mkf5e64627dd020f5c501fa0f12c3043f4dd0c20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:05.222128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem (1338 bytes)
	W0524 20:06:05.222128    4556 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560_empty.pem, impossibly tiny 0 bytes
	I0524 20:06:05.222128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0524 20:06:05.222128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0524 20:06:05.222128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0524 20:06:05.223128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0524 20:06:05.223128    4556 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem (1708 bytes)
	I0524 20:06:05.224875    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 20:06:05.272003    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0524 20:06:05.319493    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 20:06:05.370453    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\NoKubernetes-893100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 20:06:05.417021    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 20:06:05.457401    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 20:06:05.500182    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 20:06:05.544310    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 20:06:05.591935    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 20:06:05.638549    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem --> /usr/share/ca-certificates/6560.pem (1338 bytes)
	I0524 20:06:05.680340    4556 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /usr/share/ca-certificates/65602.pem (1708 bytes)
	I0524 20:06:05.726205    4556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 20:06:05.767423    4556 ssh_runner.go:195] Run: openssl version
	I0524 20:06:05.784749    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 20:06:05.813869    4556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:05.821773    4556 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:05.833861    4556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:05.850853    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 20:06:05.878304    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6560.pem && ln -fs /usr/share/ca-certificates/6560.pem /etc/ssl/certs/6560.pem"
	I0524 20:06:05.906297    4556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6560.pem
	I0524 20:06:05.914309    4556 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 20:06:05.925710    4556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6560.pem
	I0524 20:06:05.943582    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6560.pem /etc/ssl/certs/51391683.0"
	I0524 20:06:05.975648    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65602.pem && ln -fs /usr/share/ca-certificates/65602.pem /etc/ssl/certs/65602.pem"
	I0524 20:06:06.015170    4556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65602.pem
	I0524 20:06:06.023978    4556 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 20:06:06.035850    4556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65602.pem
	I0524 20:06:06.056997    4556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65602.pem /etc/ssl/certs/3ec20f2e.0"
	I0524 20:06:06.094832    4556 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 20:06:06.103406    4556 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0524 20:06:06.104848    4556 kubeadm.go:404] StartCluster: {Name:NoKubernetes-893100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:NoKubernetes-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.134.18 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:06:06.114867    4556 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 20:06:06.158802    4556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 20:06:06.188512    4556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 20:06:06.213818    4556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 20:06:06.237285    4556 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0524 20:06:06.237285    4556 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
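
The command above is the actual bootstrap step for the NoKubernetes-893100 profile: minikube shells into the VM and runs kubeadm init against the rendered config, skipping a fixed list of preflight checks. As a rough illustration only (not minikube's ssh_runner implementation; the host address and key path are placeholders, not values from this run), the shape of that call is:

// Illustrative sketch: run "kubeadm init" over SSH the way the log line above shows.
// The remote command string mirrors the log; "<vm-ip>" and the key path are placeholders.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	remote := `sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" ` +
		`kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
		`--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Swap,NumCPU,Mem`
	cmd := exec.Command("ssh", "-i", "/path/to/id_rsa", "docker@<vm-ip>", remote)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s\n", out)
	if err != nil {
		fmt.Println("kubeadm init failed:", err)
	}
}
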
	I0524 20:06:07.025821   10012 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.9420978s)
	I0524 20:06:07.035631   10012 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 20:06:07.286145   10012 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0524 20:06:07.528622   10012 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0524 20:06:07.847079   10012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:08.145384   10012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0524 20:06:08.218382   10012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:08.594082   10012 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0524 20:06:08.952592   10012 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0524 20:06:08.962588   10012 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0524 20:06:08.978277   10012 start.go:549] Will wait 60s for crictl version
	I0524 20:06:08.991301   10012 ssh_runner.go:195] Run: which crictl
	I0524 20:06:09.010872   10012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0524 20:06:09.158427   10012 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0524 20:06:09.167406   10012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 20:06:09.235215   10012 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0524 20:06:05.796410   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:05.796598   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:05.797003   10868 sshutil.go:53] new ssh client: &{IP:172.27.134.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa Username:docker}
	I0524 20:06:05.917620   10868 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.1801s)
	I0524 20:06:05.918104   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0524 20:06:05.946671   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0524 20:06:05.972632   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0524 20:06:05.998139   10868 provision.go:86] duration metric: configureAuth took 6.861093s
	I0524 20:06:05.998139   10868 buildroot.go:189] setting minikube options for container-runtime
	I0524 20:06:05.998139   10868 config.go:182] Loaded profile config "running-upgrade-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0524 20:06:05.998139   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:06.835417   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:06.835417   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:06.835417   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:08.003023   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:08.003096   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:08.008201   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:08.008892   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:08.009428   10868 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 20:06:08.198102   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 20:06:08.198102   10868 buildroot.go:70] root file system type: tmpfs
	I0524 20:06:08.198398   10868 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 20:06:08.198501   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:09.022074   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:09.022074   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:09.022074   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:09.293679   10012 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 20.10.23 ...
	I0524 20:06:09.293679   10012 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0524 20:06:09.301055   10012 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:74:1b:be Flags:up|broadcast|multicast|running}
	I0524 20:06:09.303152   10012 ip.go:210] interface addr: fe80::2d9b:6c8:36de:16db/64
	I0524 20:06:09.304149   10012 ip.go:210] interface addr: 172.27.128.1/20
	I0524 20:06:09.316137   10012 ssh_runner.go:195] Run: grep 172.27.128.1	host.minikube.internal$ /etc/hosts
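
The ip.go lines above show how the host.minikube.internal address (172.27.128.1) is picked: the host's interfaces are enumerated and the first one whose name matches the "vEthernet (Default Switch)" prefix wins. A minimal sketch of that lookup, assuming the standard library only and trimmed error handling (this is not minikube's ip.go):

// Sketch: walk the host's interfaces, match by name prefix, return the first IPv4 address.
package main

import (
	"fmt"
	"net"
	"strings"
)

func ipForInterface(prefix string) (net.IP, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			return nil, err
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP, nil
			}
		}
	}
	return nil, fmt.Errorf("no interface matching %q", prefix)
}

func main() {
	ip, err := ipForInterface("vEthernet (Default Switch)")
	fmt.Println(ip, err)
}
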
	I0524 20:06:09.324681   10012 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 20:06:09.332631   10012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:06:09.369439   10012 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:06:09.369531   10012 docker.go:563] Images already preloaded, skipping extraction
	I0524 20:06:09.378738   10012 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0524 20:06:09.416677   10012 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0524 20:06:09.416752   10012 cache_images.go:84] Images are preloaded, skipping loading
	I0524 20:06:09.424464   10012 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
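
The docker info probe above reads the runtime's cgroup driver; the KubeletConfiguration rendered a few lines below sets cgroupDriver to the same value (cgroupfs here), since a mismatch between kubelet and runtime prevents pods from starting. A minimal sketch of that probe, assuming the docker CLI is on PATH (not the minikube code path):

// Sketch: ask Docker which cgroup driver it uses, so the kubelet config can be kept in sync.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("docker info failed:", err)
		return
	}
	driver := strings.TrimSpace(string(out))
	fmt.Println("docker cgroup driver:", driver)
	if driver != "cgroupfs" {
		fmt.Println("kubelet cgroupDriver must be updated to match")
	}
}
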
	I0524 20:06:09.475847   10012 cni.go:84] Creating CNI manager for ""
	I0524 20:06:09.475914   10012 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:09.475914   10012 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0524 20:06:09.475982   10012 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.27.136.175 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-893100 NodeName:pause-893100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.27.136.175"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.27.136.175 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0524 20:06:09.476249   10012 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.27.136.175
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-893100"
	  kubeletExtraArgs:
	    node-ip: 172.27.136.175
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.27.136.175"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0524 20:06:09.476452   10012 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=pause-893100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.27.136.175
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:pause-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0524 20:06:09.485138   10012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0524 20:06:09.504201   10012 binaries.go:44] Found k8s binaries, skipping transfer
	I0524 20:06:09.513369   10012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0524 20:06:09.532371   10012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0524 20:06:09.564859   10012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0524 20:06:09.594750   10012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0524 20:06:09.633134   10012 ssh_runner.go:195] Run: grep 172.27.136.175	control-plane.minikube.internal$ /etc/hosts
	I0524 20:06:09.645572   10012 certs.go:56] Setting up C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100 for IP: 172.27.136.175
	I0524 20:06:09.645572   10012 certs.go:190] acquiring lock for shared ca certs: {Name:mk7484196a709b348d442d7deac4228c8c4b804e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:09.646787   10012 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key
	I0524 20:06:09.647146   10012 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key
	I0524 20:06:09.648111   10012 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\client.key
	I0524 20:06:09.648492   10012 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\apiserver.key.85da34c2
	I0524 20:06:09.648994   10012 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\proxy-client.key
	I0524 20:06:09.650350   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem (1338 bytes)
	W0524 20:06:09.650774   10012 certs.go:433] ignoring C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560_empty.pem, impossibly tiny 0 bytes
	I0524 20:06:09.650774   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0524 20:06:09.651481   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0524 20:06:09.651481   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0524 20:06:09.652160   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0524 20:06:09.652338   10012 certs.go:437] found cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem (1708 bytes)
	I0524 20:06:09.654026   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0524 20:06:09.697087   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0524 20:06:09.743829   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0524 20:06:09.789241   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0524 20:06:09.832568   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0524 20:06:09.878635   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0524 20:06:09.922855   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0524 20:06:09.967076   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0524 20:06:10.011306   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0524 20:06:10.054388   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\6560.pem --> /usr/share/ca-certificates/6560.pem (1338 bytes)
	I0524 20:06:10.101750   10012 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /usr/share/ca-certificates/65602.pem (1708 bytes)
	I0524 20:06:10.152071   10012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0524 20:06:06.507025    4556 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0524 20:06:06.508025    4556 kubeadm.go:322] W0524 20:06:06.503827    1529 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 20:06:10.225770   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:10.225770   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:10.229774   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:10.229774   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:10.229774   10868 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 20:06:10.399947   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 20:06:10.399947   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:11.230707   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:11.230707   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:11.230707   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:12.467458   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:12.467458   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:12.471467   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:12.472454   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:12.472454   10868 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
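
The SSH command above is an idempotent unit update: the freshly rendered docker.service is written to docker.service.new, and only if it differs from the installed unit is it moved into place and the daemon reloaded and restarted. A minimal sketch of the same compare-then-swap idea (paths from the log; the daemon-reload/restart side effects are left as a stub, and this is not the provisioner's real code):

// Sketch: install the new unit file only when it differs from what is already on disk.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	cur, _ := os.ReadFile("/lib/systemd/system/docker.service")
	next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
	if err != nil {
		fmt.Println("no pending unit file:", err)
		return
	}
	if bytes.Equal(cur, next) {
		fmt.Println("docker.service unchanged, nothing to do")
		return
	}
	if err := os.Rename("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service"); err != nil {
		fmt.Println("swap failed:", err)
		return
	}
	fmt.Println("unit replaced; daemon-reload and docker restart would follow here")
}
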
	I0524 20:06:10.211751   10012 ssh_runner.go:195] Run: openssl version
	I0524 20:06:10.229774   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0524 20:06:10.266275   10012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:10.277278   10012 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 May 24 18:41 /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:10.287269   10012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0524 20:06:10.310865   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0524 20:06:10.340942   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6560.pem && ln -fs /usr/share/ca-certificates/6560.pem /etc/ssl/certs/6560.pem"
	I0524 20:06:10.366956   10012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6560.pem
	I0524 20:06:10.375774   10012 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 May 24 18:51 /usr/share/ca-certificates/6560.pem
	I0524 20:06:10.384953   10012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6560.pem
	I0524 20:06:10.405947   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6560.pem /etc/ssl/certs/51391683.0"
	I0524 20:06:10.444116   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/65602.pem && ln -fs /usr/share/ca-certificates/65602.pem /etc/ssl/certs/65602.pem"
	I0524 20:06:10.474716   10012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/65602.pem
	I0524 20:06:10.482279   10012 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 May 24 18:51 /usr/share/ca-certificates/65602.pem
	I0524 20:06:10.492728   10012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/65602.pem
	I0524 20:06:10.512758   10012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/65602.pem /etc/ssl/certs/3ec20f2e.0"
	I0524 20:06:10.583724   10012 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0524 20:06:10.605736   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0524 20:06:10.624468   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0524 20:06:10.643480   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0524 20:06:10.669632   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0524 20:06:10.689627   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0524 20:06:10.708343   10012 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
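
The openssl runs above decide whether the existing pause-893100 control-plane certificates can be reused: "-checkend 86400" makes openssl exit non-zero if the certificate expires within the next 24 hours. An illustrative loop over the same cert paths (not the minikube implementation):

// Sketch: flag any control-plane cert that expires within 24 hours, using the
// same "openssl x509 -checkend 86400" call as the log above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	certs := []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	}
	for _, c := range certs {
		err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run()
		if err != nil {
			fmt.Printf("%s: expires within 24h or is unreadable: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: valid for at least another 24h\n", c)
	}
}
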
	I0524 20:06:10.717465   10012 kubeadm.go:404] StartCluster: {Name:pause-893100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.27.2 ClusterName:pause-893100 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.27.136.175 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-
security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:06:10.726272   10012 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 20:06:10.777160   10012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0524 20:06:10.801082   10012 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0524 20:06:10.801082   10012 kubeadm.go:636] restartCluster start
	I0524 20:06:10.811686   10012 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0524 20:06:10.843193   10012 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:10.843880   10012 kubeconfig.go:92] found "pause-893100" server: "https://172.27.136.175:8443"
	I0524 20:06:10.845890   10012 kapi.go:59] client config for pause-893100: &rest.Config{Host:"https://172.27.136.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-893100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-893100\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8
(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 20:06:10.856500   10012 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0524 20:06:10.883825   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:10.893143   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:10.920443   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:11.421350   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:11.430355   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:11.451625   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:11.929976   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:11.939775   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:11.970377   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:12.436501   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:12.447814   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:12.471467   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:12.926501   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:12.937214   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:12.960579   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:13.427159   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:13.438876   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:13.464698   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:13.929541   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:13.947415   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:13.969612   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:14.432643   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:14.442446   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:14.467022   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:14.925651   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:14.938049   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:14.962986   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:11.632161    4556 kubeadm.go:322] W0524 20:06:11.628114    1529 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0524 20:06:15.430640   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:15.441565   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:15.463640   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:15.932254   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:15.947114   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:15.968471   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:16.434507   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:16.449727   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:16.473779   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:16.923657   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:16.934780   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:16.956350   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:17.429985   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:17.441030   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:17.464322   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:17.931369   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:17.941210   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:17.966233   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:18.435534   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:18.446265   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:18.467386   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:18.923047   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:18.934000   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:18.958043   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:19.427096   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:19.438136   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:19.462070   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:19.928866   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:19.938699   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:19.960165   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:24.351871    4556 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0524 20:06:24.351965    4556 kubeadm.go:322] [preflight] Running pre-flight checks
	I0524 20:06:24.352240    4556 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0524 20:06:24.352574    4556 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0524 20:06:24.352756    4556 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0524 20:06:24.352990    4556 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0524 20:06:24.355726    4556 out.go:204]   - Generating certificates and keys ...
	I0524 20:06:24.355726    4556 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0524 20:06:24.355726    4556 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost nokubernetes-893100] and IPs [172.27.134.18 127.0.0.1 ::1]
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0524 20:06:24.356706    4556 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost nokubernetes-893100] and IPs [172.27.134.18 127.0.0.1 ::1]
	I0524 20:06:24.357726    4556 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0524 20:06:24.357726    4556 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0524 20:06:24.357726    4556 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0524 20:06:24.357726    4556 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0524 20:06:24.358732    4556 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0524 20:06:24.362720    4556 out.go:204]   - Booting up control plane ...
	I0524 20:06:24.362720    4556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0524 20:06:24.362720    4556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0524 20:06:24.362720    4556 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0524 20:06:24.362720    4556 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0524 20:06:24.363750    4556 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0524 20:06:24.363750    4556 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.003819 seconds
	I0524 20:06:24.363750    4556 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0524 20:06:24.363750    4556 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0524 20:06:24.363750    4556 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0524 20:06:24.364733    4556 kubeadm.go:322] [mark-control-plane] Marking the node nokubernetes-893100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0524 20:06:24.364733    4556 kubeadm.go:322] [bootstrap-token] Using token: vpphdh.2ag8sqvvjsk8wehw
	I0524 20:06:24.367712    4556 out.go:204]   - Configuring RBAC rules ...
	I0524 20:06:24.367712    4556 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0524 20:06:24.367712    4556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0524 20:06:24.368747    4556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0524 20:06:24.368747    4556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0524 20:06:24.368747    4556 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0524 20:06:24.368747    4556 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0524 20:06:24.369771    4556 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0524 20:06:24.369771    4556 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0524 20:06:24.369771    4556 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0524 20:06:24.369771    4556 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0524 20:06:24.369771    4556 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.369771    4556 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0524 20:06:24.369771    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0524 20:06:24.370733    4556 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0524 20:06:24.370733    4556 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0524 20:06:24.370733    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0524 20:06:24.370733    4556 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0524 20:06:24.370733    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vpphdh.2ag8sqvvjsk8wehw \
	I0524 20:06:24.370733    4556 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 \
	I0524 20:06:24.370733    4556 kubeadm.go:322] 	--control-plane 
	I0524 20:06:24.370733    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0524 20:06:24.370733    4556 kubeadm.go:322] 
	I0524 20:06:24.370733    4556 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vpphdh.2ag8sqvvjsk8wehw \
	I0524 20:06:24.370733    4556 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:dee5b84add844e72bbc0f909786761c65e4f14cf428ccb3d330958e040a6a6b9 
	I0524 20:06:24.370733    4556 cni.go:84] Creating CNI manager for ""
	I0524 20:06:24.370733    4556 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:24.373704    4556 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 20:06:20.433357   10012 api_server.go:166] Checking apiserver status ...
	I0524 20:06:20.444510   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0524 20:06:20.474481   10012 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:20.890580   10012 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
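
The repeated "Checking apiserver status" blocks above are a single poll loop: roughly twice a second the provisioner looks for a kube-apiserver process via pgrep, and when the context deadline passes without finding one it concludes the cluster "needs reconfigure", as logged here. A generic sketch of that poll-until-deadline shape, keeping only the pgrep pattern from the log (the timeout value is an assumption, not taken from this run):

// Sketch: poll for a kube-apiserver PID until the context deadline expires.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver error: %w", ctx.Err())
		case <-ticker.C:
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return string(out), nil
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	fmt.Println(pid, err)
}
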
	I0524 20:06:20.890652   10012 kubeadm.go:1123] stopping kube-system containers ...
	I0524 20:06:20.900473   10012 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0524 20:06:20.945688   10012 docker.go:459] Stopping containers: [ce5ccdf7db90 e32e5e876d0b 2fdde93d5dbe f47bd4ea62c0 942ff1fce0b3 f729780e2503 4c6a62dd6d30 154b072e6b05 9c38b6ba36d7 251fb037891f 9fdc40d1ef26 25915e591443 cdc9a8b35153 cb84e1827ba6 d4b4d742aac0 081fe0ce4189 b7b77095ef5a 81a385409af8 f933432635d0 2fab2362d925 eb3fae8732d0 72ae2d8f679f b2a0e99efa06 d9880fb68a45 f6507871d53c 9157454e4296]
	I0524 20:06:20.953822   10012 ssh_runner.go:195] Run: docker stop ce5ccdf7db90 e32e5e876d0b 2fdde93d5dbe f47bd4ea62c0 942ff1fce0b3 f729780e2503 4c6a62dd6d30 154b072e6b05 9c38b6ba36d7 251fb037891f 9fdc40d1ef26 25915e591443 cdc9a8b35153 cb84e1827ba6 d4b4d742aac0 081fe0ce4189 b7b77095ef5a 81a385409af8 f933432635d0 2fab2362d925 eb3fae8732d0 72ae2d8f679f b2a0e99efa06 d9880fb68a45 f6507871d53c 9157454e4296
	I0524 20:06:24.387732    4556 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 20:06:24.413737    4556 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0524 20:06:24.460770    4556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 20:06:24.470743    4556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e minikube.k8s.io/name=NoKubernetes-893100 minikube.k8s.io/updated_at=2023_05_24T20_06_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 20:06:24.471712    4556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0524 20:06:24.559407    4556 ops.go:34] apiserver oom_adj: -16
	I0524 20:06:24.979604    4556 kubeadm.go:1076] duration metric: took 518.8342ms to wait for elevateKubeSystemPrivileges.
	I0524 20:06:24.979604    4556 kubeadm.go:406] StartCluster complete in 18.8747661s
	I0524 20:06:24.979604    4556 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:24.979604    4556 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 20:06:24.981234    4556 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:24.983214    4556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 20:06:24.983214    4556 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0524 20:06:24.983214    4556 addons.go:66] Setting storage-provisioner=true in profile "NoKubernetes-893100"
	I0524 20:06:24.983214    4556 addons.go:228] Setting addon storage-provisioner=true in "NoKubernetes-893100"
	I0524 20:06:24.983214    4556 addons.go:66] Setting default-storageclass=true in profile "NoKubernetes-893100"
	I0524 20:06:24.983214    4556 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "NoKubernetes-893100"
	I0524 20:06:24.983214    4556 host.go:66] Checking if "NoKubernetes-893100" exists ...
	I0524 20:06:24.983214    4556 config.go:182] Loaded profile config "NoKubernetes-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 20:06:24.984213    4556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-893100 ).state
	I0524 20:06:24.984213    4556 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM NoKubernetes-893100 ).state
	I0524 20:06:25.211785    4556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.27.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0524 20:06:25.563981    4556 kapi.go:248] "coredns" deployment in "kube-system" namespace and "NoKubernetes-893100" context rescaled to 1 replicas
	I0524 20:06:25.563981    4556 start.go:223] Will wait 6m0s for node &{Name: IP:172.27.134.18 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 20:06:25.568118    4556 out.go:177] * Verifying Kubernetes components...
	I0524 20:06:26.340931   10012 ssh_runner.go:235] Completed: docker stop ce5ccdf7db90 e32e5e876d0b 2fdde93d5dbe f47bd4ea62c0 942ff1fce0b3 f729780e2503 4c6a62dd6d30 154b072e6b05 9c38b6ba36d7 251fb037891f 9fdc40d1ef26 25915e591443 cdc9a8b35153 cb84e1827ba6 d4b4d742aac0 081fe0ce4189 b7b77095ef5a 81a385409af8 f933432635d0 2fab2362d925 eb3fae8732d0 72ae2d8f679f b2a0e99efa06 d9880fb68a45 f6507871d53c 9157454e4296: (5.387112s)
	I0524 20:06:26.353946   10012 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0524 20:06:26.415998   10012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0524 20:06:26.435993   10012 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 May 24 20:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 May 24 20:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 May 24 20:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 May 24 20:03 /etc/kubernetes/scheduler.conf
	
	I0524 20:06:26.454002   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0524 20:06:26.488848   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0524 20:06:26.521267   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0524 20:06:26.538077   10012 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:26.548788   10012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0524 20:06:26.582717   10012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0524 20:06:26.611300   10012 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0524 20:06:26.636218   10012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0524 20:06:26.674616   10012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0524 20:06:26.696923   10012 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0524 20:06:26.696923   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:26.808540   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:28.718776   10012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.910239s)
	I0524 20:06:28.718887   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:29.128272   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:29.291393   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:29.441004   10012 api_server.go:52] waiting for apiserver process to appear ...
	I0524 20:06:29.456788   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 20:06:29.478798   10012 api_server.go:72] duration metric: took 37.7941ms to wait for apiserver process to appear ...
	I0524 20:06:29.478798   10012 api_server.go:88] waiting for apiserver healthz status ...
	I0524 20:06:29.478798   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:26.560253   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
	+++ /lib/systemd/system/docker.service.new
	@@ -3,9 +3,12 @@
	 Documentation=https://docs.docker.com
	 After=network.target  minikube-automount.service docker.socket
	 Requires= minikube-automount.service docker.socket 
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -21,7 +24,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 20:06:26.560426   10868 machine.go:91] provisioned docker machine in 32.1769885s
	I0524 20:06:26.560426   10868 start.go:300] post-start starting for "running-upgrade-893100" (driver="hyperv")
	I0524 20:06:26.560426   10868 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 20:06:26.575492   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 20:06:26.575492   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:27.435244   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:27.435244   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:27.435244   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:28.838356   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:28.838356   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:28.838933   10868 sshutil.go:53] new ssh client: &{IP:172.27.134.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa Username:docker}
	I0524 20:06:28.959761   10868 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (2.3841871s)
	I0524 20:06:28.972843   10868 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 20:06:28.980533   10868 info.go:137] Remote host: Buildroot 2019.02.7
	I0524 20:06:28.980613   10868 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0524 20:06:28.980976   10868 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0524 20:06:28.982117   10868 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> 65602.pem in /etc/ssl/certs
	I0524 20:06:28.995954   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0524 20:06:29.013022   10868 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /etc/ssl/certs/65602.pem (1708 bytes)
	I0524 20:06:29.050228   10868 start.go:303] post-start completed in 2.4898051s
	I0524 20:06:29.050228   10868 fix.go:57] fixHost completed within 36.3092251s
	I0524 20:06:29.050228   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:29.975618   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:29.975688   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:29.975688   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:33.972428    7000 start.go:368] acquired machines lock for "force-systemd-flag-052200" in 31.6477009s
	I0524 20:06:33.972428    7000 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-052200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-052200 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 20:06:33.972428    7000 start.go:125] createHost starting for "" (driver="hyperv")
	I0524 20:06:31.397968   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:31.398200   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:31.402578   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:31.403192   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:31.403192   10868 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0524 20:06:31.691038   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684958791.670023101
	
	I0524 20:06:31.691103   10868 fix.go:207] guest clock: 1684958791.670023101
	I0524 20:06:31.691103   10868 fix.go:220] Guest: 2023-05-24 20:06:31.670023101 +0000 UTC Remote: 2023-05-24 20:06:29.0502283 +0000 UTC m=+109.032679301 (delta=2.619794801s)
	I0524 20:06:31.691177   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:32.544937   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:32.545011   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:32.545381   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:33.789345   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:33.789403   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:33.795876   10868 main.go:141] libmachine: Using SSH client type: native
	I0524 20:06:33.796952   10868 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.134.82 22 <nil> <nil>}
	I0524 20:06:33.797016   10868 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1684958791
	I0524 20:06:33.971448   10868 main.go:141] libmachine: SSH cmd err, output: <nil>: Wed May 24 20:06:31 UTC 2023
	
	I0524 20:06:33.971448   10868 fix.go:227] clock set: Wed May 24 20:06:31 UTC 2023
	 (err=<nil>)
	I0524 20:06:33.971448   10868 start.go:83] releasing machines lock for "running-upgrade-893100", held for 41.2304529s
	I0524 20:06:33.972428   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:34.889266   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:34.889266   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:34.889266   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:34.493381   10012 api_server.go:269] stopped: https://172.27.136.175:8443/healthz: Get "https://172.27.136.175:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0524 20:06:34.998973   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:35.017204   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0524 20:06:35.017271   10012 api_server.go:103] status: https://172.27.136.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0524 20:06:33.976431    7000 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0524 20:06:33.976431    7000 start.go:159] libmachine.API.Create for "force-systemd-flag-052200" (driver="hyperv")
	I0524 20:06:33.976431    7000 client.go:168] LocalClient.Create starting
	I0524 20:06:33.977435    7000 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem
	I0524 20:06:33.977435    7000 main.go:141] libmachine: Decoding PEM data...
	I0524 20:06:33.977435    7000 main.go:141] libmachine: Parsing certificate...
	I0524 20:06:33.977435    7000 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem
	I0524 20:06:33.977435    7000 main.go:141] libmachine: Decoding PEM data...
	I0524 20:06:33.977435    7000 main.go:141] libmachine: Parsing certificate...
	I0524 20:06:33.978420    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0524 20:06:34.493381    7000 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0524 20:06:34.493381    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:34.493381    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0524 20:06:35.503294   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:35.519446   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0524 20:06:35.519446   10012 api_server.go:103] status: https://172.27.136.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0524 20:06:36.006618   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:36.020613   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0524 20:06:36.020613   10012 api_server.go:103] status: https://172.27.136.175:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0524 20:06:36.496433   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:36.510434   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 200:
	ok
	I0524 20:06:36.539217   10012 api_server.go:141] control plane version: v1.27.2
	I0524 20:06:36.539217   10012 api_server.go:131] duration metric: took 7.06043s to wait for apiserver health ...
	I0524 20:06:36.539217   10012 cni.go:84] Creating CNI manager for ""
	I0524 20:06:36.539217   10012 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 20:06:36.543083   10012 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0524 20:06:36.566890   10012 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0524 20:06:36.594851   10012 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0524 20:06:36.742044   10012 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 20:06:36.793990   10012 system_pods.go:59] 6 kube-system pods found
	I0524 20:06:36.793990   10012 system_pods.go:61] "coredns-5d78c9869d-ngwxf" [5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0524 20:06:36.793990   10012 system_pods.go:61] "etcd-pause-893100" [042c47b3-76c5-49a8-be92-2eece9ec9522] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0524 20:06:36.793990   10012 system_pods.go:61] "kube-apiserver-pause-893100" [22d4a079-779f-458c-b323-4c7f578ddf80] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0524 20:06:36.793990   10012 system_pods.go:61] "kube-controller-manager-pause-893100" [01772675-fb9c-4142-ac0d-984ba9d4c05f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0524 20:06:36.793990   10012 system_pods.go:61] "kube-proxy-c5vrt" [4372194d-1a11-4f50-97a2-a9b8863e1d2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0524 20:06:36.793990   10012 system_pods.go:61] "kube-scheduler-pause-893100" [e18658f2-46b9-4808-a66b-0b99af639027] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0524 20:06:36.793990   10012 system_pods.go:74] duration metric: took 51.9464ms to wait for pod list to return data ...
	I0524 20:06:36.793990   10012 node_conditions.go:102] verifying NodePressure condition ...
	I0524 20:06:36.814990   10012 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 20:06:36.814990   10012 node_conditions.go:123] node cpu capacity is 2
	I0524 20:06:36.814990   10012 node_conditions.go:105] duration metric: took 21.0001ms to run NodePressure ...
	I0524 20:06:36.814990   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0524 20:06:37.878826   10012 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.0638371s)
	I0524 20:06:37.878826   10012 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0524 20:06:37.892821   10012 kubeadm.go:787] kubelet initialised
	I0524 20:06:37.892821   10012 kubeadm.go:788] duration metric: took 13.9948ms waiting for restarted kubelet to initialise ...
	I0524 20:06:37.892821   10012 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 20:06:37.909538   10012 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:39.475468   10012 pod_ready.go:92] pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:39.475468   10012 pod_ready.go:81] duration metric: took 1.5659319s waiting for pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:39.475468   10012 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:36.338371   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:36.338619   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:36.342252   10868 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 20:06:36.343221   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:36.353265   10868 ssh_runner.go:195] Run: cat /version.json
	I0524 20:06:36.353265   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-893100 ).state
	I0524 20:06:37.368370   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:37.368370   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:37.368370   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:37.406338   10868 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:37.406338   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:37.406338   10868 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-893100 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:38.901480   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:38.901480   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:38.902494   10868 sshutil.go:53] new ssh client: &{IP:172.27.134.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa Username:docker}
	I0524 20:06:38.996335   10868 main.go:141] libmachine: [stdout =====>] : 172.27.134.82
	
	I0524 20:06:38.996335   10868 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:38.996335   10868 sshutil.go:53] new ssh client: &{IP:172.27.134.82 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\running-upgrade-893100\id_rsa Username:docker}
	I0524 20:06:39.034420   10868 ssh_runner.go:235] Completed: cat /version.json: (2.6811586s)
	W0524 20:06:39.034420   10868 start.go:409] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0524 20:06:39.048577   10868 ssh_runner.go:195] Run: systemctl --version
	I0524 20:06:39.087442   10868 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 20:06:39.172435   10868 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 20:06:39.172435   10868 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.8301879s)
	I0524 20:06:39.190434   10868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0524 20:06:39.221212   10868 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0524 20:06:39.230789   10868 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0524 20:06:39.230789   10868 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0524 20:06:39.230789   10868 start.go:481] detecting cgroup driver to use...
	I0524 20:06:39.231791   10868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:06:39.256768   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0524 20:06:39.278742   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 20:06:39.290371   10868 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 20:06:39.306364   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 20:06:39.341273   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:06:39.366461   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 20:06:39.395843   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:06:39.431834   10868 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 20:06:39.469460   10868 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 20:06:39.528350   10868 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 20:06:39.559807   10868 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 20:06:39.583690   10868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:39.922278   10868 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 20:06:39.960021   10868 start.go:481] detecting cgroup driver to use...
	I0524 20:06:39.978881   10868 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 20:06:40.010550   10868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:06:40.036238   10868 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 20:06:40.110050   10868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:06:40.139694   10868 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 20:06:40.157646   10868 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:06:40.187302   10868 ssh_runner.go:195] Run: which cri-dockerd
	I0524 20:06:35.360287    7000 main.go:141] libmachine: [stdout =====>] : False
	
	I0524 20:06:35.360287    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:35.360287    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0524 20:06:36.022609    7000 main.go:141] libmachine: [stdout =====>] : True
	
	I0524 20:06:36.022609    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:36.022609    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0524 20:06:38.316496    7000 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0524 20:06:38.316583    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:38.319761    7000 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube1/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.30.1-1684536668-16501-amd64.iso...
	I0524 20:06:38.854487    7000 main.go:141] libmachine: Creating SSH key...
	I0524 20:06:39.080432    7000 main.go:141] libmachine: Creating VM...
	I0524 20:06:39.080432    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0524 20:06:41.508151   10012 pod_ready.go:102] pod "etcd-pause-893100" in "kube-system" namespace has status "Ready":"False"
	I0524 20:06:43.512930   10012 pod_ready.go:92] pod "etcd-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.512930   10012 pod_ready.go:81] duration metric: took 4.0374679s waiting for pod "etcd-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.512930   10012 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.530172   10012 pod_ready.go:92] pod "kube-apiserver-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.530172   10012 pod_ready.go:81] duration metric: took 17.2413ms waiting for pod "kube-apiserver-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.530172   10012 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.539962   10012 pod_ready.go:92] pod "kube-controller-manager-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.539962   10012 pod_ready.go:81] duration metric: took 9.7904ms waiting for pod "kube-controller-manager-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.539962   10012 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c5vrt" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.549562   10012 pod_ready.go:92] pod "kube-proxy-c5vrt" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.549562   10012 pod_ready.go:81] duration metric: took 9.6002ms waiting for pod "kube-proxy-c5vrt" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.549562   10012 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.558572   10012 pod_ready.go:92] pod "kube-scheduler-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:43.558572   10012 pod_ready.go:81] duration metric: took 9.0098ms waiting for pod "kube-scheduler-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:43.558572   10012 pod_ready.go:38] duration metric: took 5.6657589s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 20:06:43.559557   10012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0524 20:06:43.577578   10012 ops.go:34] apiserver oom_adj: -16
	I0524 20:06:43.578578   10012 kubeadm.go:640] restartCluster took 32.7775281s
	I0524 20:06:43.578578   10012 kubeadm.go:406] StartCluster complete in 32.8611452s
	I0524 20:06:43.578578   10012 settings.go:142] acquiring lock: {Name:mkab556291043b7dcd90a9d60c03aa7fa181e125 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:43.578578   10012 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 20:06:43.579549   10012 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube1\minikube-integration\kubeconfig: {Name:mk2e2755bd0ffee2cfcc8bbf22c26f99d53697ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0524 20:06:43.581558   10012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0524 20:06:43.581558   10012 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0524 20:06:43.584564   10012 out.go:177] * Enabled addons: 
	I0524 20:06:43.581558   10012 config.go:182] Loaded profile config "pause-893100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 20:06:43.587581   10012 addons.go:499] enable addons completed in 6.0225ms: enabled=[]
	I0524 20:06:43.594568   10012 kapi.go:59] client config for pause-893100: &rest.Config{Host:"https://172.27.136.175:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-893100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\profiles\\pause-893100\\client.key", CAFile:"C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20cb120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0524 20:06:43.600557   10012 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-893100" context rescaled to 1 replicas
	I0524 20:06:43.600557   10012 start.go:223] Will wait 6m0s for node &{Name: IP:172.27.136.175 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0524 20:06:43.604566   10012 out.go:177] * Verifying Kubernetes components...
	I0524 20:06:43.619566   10012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 20:06:43.740039   10012 node_ready.go:35] waiting up to 6m0s for node "pause-893100" to be "Ready" ...
	I0524 20:06:43.740039   10012 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0524 20:06:43.745039   10012 node_ready.go:49] node "pause-893100" has status "Ready":"True"
	I0524 20:06:43.745039   10012 node_ready.go:38] duration metric: took 5.0003ms waiting for node "pause-893100" to be "Ready" ...
	I0524 20:06:43.745039   10012 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 20:06:43.928187   10012 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:44.321285   10012 pod_ready.go:92] pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:44.321360   10012 pod_ready.go:81] duration metric: took 392.9478ms waiting for pod "coredns-5d78c9869d-ngwxf" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:44.321360   10012 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:44.714492   10012 pod_ready.go:92] pod "etcd-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:44.714492   10012 pod_ready.go:81] duration metric: took 393.1323ms waiting for pod "etcd-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:44.714492   10012 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.145919   10012 pod_ready.go:92] pod "kube-apiserver-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:45.145919   10012 pod_ready.go:81] duration metric: took 431.4269ms waiting for pod "kube-apiserver-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.145919   10012 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:40.203647   10868 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 20:06:40.217401   10868 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 20:06:40.246006   10868 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 20:06:40.528783   10868 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 20:06:40.824730   10868 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 20:06:40.824730   10868 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 20:06:40.859463   10868 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:06:41.276580   10868 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 20:06:40.942714    7000 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0524 20:06:40.942714    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:40.942860    7000 main.go:141] libmachine: Using switch "Default Switch"
	I0524 20:06:40.942968    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0524 20:06:41.747208    7000 main.go:141] libmachine: [stdout =====>] : True
	
	I0524 20:06:41.747286    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:41.747286    7000 main.go:141] libmachine: Creating VHD
	I0524 20:06:41.747286    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0524 20:06:43.547605    7000 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube1
	Path                    : C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200\
	                          fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : DF99FB25-92D3-4C42-96BF-167E883A3511
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0524 20:06:43.547605    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:43.547605    7000 main.go:141] libmachine: Writing magic tar header
	I0524 20:06:43.547605    7000 main.go:141] libmachine: Writing SSH key tar header
	I0524 20:06:43.556553    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0524 20:06:45.513680   10012 pod_ready.go:92] pod "kube-controller-manager-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:45.513680   10012 pod_ready.go:81] duration metric: took 367.7614ms waiting for pod "kube-controller-manager-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.513680   10012 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c5vrt" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.908936   10012 pod_ready.go:92] pod "kube-proxy-c5vrt" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:45.908936   10012 pod_ready.go:81] duration metric: took 395.2555ms waiting for pod "kube-proxy-c5vrt" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:45.908936   10012 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:46.322680   10012 pod_ready.go:92] pod "kube-scheduler-pause-893100" in "kube-system" namespace has status "Ready":"True"
	I0524 20:06:46.322680   10012 pod_ready.go:81] duration metric: took 413.7445ms waiting for pod "kube-scheduler-pause-893100" in "kube-system" namespace to be "Ready" ...
	I0524 20:06:46.322680   10012 pod_ready.go:38] duration metric: took 2.5776422s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0524 20:06:46.322680   10012 api_server.go:52] waiting for apiserver process to appear ...
	I0524 20:06:46.333455   10012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 20:06:46.357073   10012 api_server.go:72] duration metric: took 2.756517s to wait for apiserver process to appear ...
	I0524 20:06:46.357073   10012 api_server.go:88] waiting for apiserver healthz status ...
	I0524 20:06:46.357073   10012 api_server.go:253] Checking apiserver healthz at https://172.27.136.175:8443/healthz ...
	I0524 20:06:46.365971   10012 api_server.go:279] https://172.27.136.175:8443/healthz returned 200:
	ok
	I0524 20:06:46.369815   10012 api_server.go:141] control plane version: v1.27.2
	I0524 20:06:46.369927   10012 api_server.go:131] duration metric: took 12.8544ms to wait for apiserver health ...
	I0524 20:06:46.369927   10012 system_pods.go:43] waiting for kube-system pods to appear ...
	I0524 20:06:46.511715   10012 system_pods.go:59] 6 kube-system pods found
	I0524 20:06:46.511715   10012 system_pods.go:61] "coredns-5d78c9869d-ngwxf" [5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "etcd-pause-893100" [042c47b3-76c5-49a8-be92-2eece9ec9522] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "kube-apiserver-pause-893100" [22d4a079-779f-458c-b323-4c7f578ddf80] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "kube-controller-manager-pause-893100" [01772675-fb9c-4142-ac0d-984ba9d4c05f] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "kube-proxy-c5vrt" [4372194d-1a11-4f50-97a2-a9b8863e1d2e] Running
	I0524 20:06:46.511787   10012 system_pods.go:61] "kube-scheduler-pause-893100" [e18658f2-46b9-4808-a66b-0b99af639027] Running
	I0524 20:06:46.511787   10012 system_pods.go:74] duration metric: took 141.8598ms to wait for pod list to return data ...
	I0524 20:06:46.511787   10012 default_sa.go:34] waiting for default service account to be created ...
	I0524 20:06:46.711419   10012 default_sa.go:45] found service account: "default"
	I0524 20:06:46.711419   10012 default_sa.go:55] duration metric: took 199.6318ms for default service account to be created ...
	I0524 20:06:46.711941   10012 system_pods.go:116] waiting for k8s-apps to be running ...
	I0524 20:06:46.921968   10012 system_pods.go:86] 6 kube-system pods found
	I0524 20:06:46.921968   10012 system_pods.go:89] "coredns-5d78c9869d-ngwxf" [5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "etcd-pause-893100" [042c47b3-76c5-49a8-be92-2eece9ec9522] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "kube-apiserver-pause-893100" [22d4a079-779f-458c-b323-4c7f578ddf80] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "kube-controller-manager-pause-893100" [01772675-fb9c-4142-ac0d-984ba9d4c05f] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "kube-proxy-c5vrt" [4372194d-1a11-4f50-97a2-a9b8863e1d2e] Running
	I0524 20:06:46.921968   10012 system_pods.go:89] "kube-scheduler-pause-893100" [e18658f2-46b9-4808-a66b-0b99af639027] Running
	I0524 20:06:46.921968   10012 system_pods.go:126] duration metric: took 210.0272ms to wait for k8s-apps to be running ...
	I0524 20:06:46.921968   10012 system_svc.go:44] waiting for kubelet service to be running ....
	I0524 20:06:46.933969   10012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 20:06:46.963249   10012 system_svc.go:56] duration metric: took 41.2803ms WaitForService to wait for kubelet.
	I0524 20:06:46.963249   10012 kubeadm.go:581] duration metric: took 3.362693s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0524 20:06:46.963249   10012 node_conditions.go:102] verifying NodePressure condition ...
	I0524 20:06:47.121890   10012 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0524 20:06:47.121890   10012 node_conditions.go:123] node cpu capacity is 2
	I0524 20:06:47.122534   10012 node_conditions.go:105] duration metric: took 159.2853ms to run NodePressure ...
	I0524 20:06:47.122658   10012 start.go:228] waiting for startup goroutines ...
	I0524 20:06:47.122658   10012 start.go:233] waiting for cluster config update ...
	I0524 20:06:47.122658   10012 start.go:242] writing updated cluster config ...
	I0524 20:06:47.134855   10012 ssh_runner.go:195] Run: rm -f paused
	I0524 20:06:47.353227   10012 start.go:568] kubectl: 1.18.2, cluster: 1.27.2 (minor skew: 9)
	I0524 20:06:47.356345   10012 out.go:177] 
	W0524 20:06:47.359179   10012 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.27.2.
	I0524 20:06:47.364178   10012 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0524 20:06:47.367185   10012 out.go:177] * Done! kubectl is now configured to use "pause-893100" cluster and "default" namespace by default
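	The warning above flags the v1.18.2 kubectl found on PATH against the v1.27.2 cluster; per the printed hint, the bundled kubectl avoids the skew. A minimal sketch, assuming minikube.exe is on PATH (the report invokes out/minikube-windows-amd64.exe directly):

    # Use the kubectl that matches the cluster version; arguments after "--" are passed through unchanged.
    minikube.exe -p pause-893100 kubectl -- get pods -A
    minikube.exe -p pause-893100 kubectl -- version
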
	I0524 20:06:45.417115    7000 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:06:45.417216    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:45.417326    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200\disk.vhd' -SizeBytes 20000MB
	I0524 20:06:46.675013    7000 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:06:46.675013    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:46.675013    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM force-systemd-flag-052200 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0524 20:06:49.019653    7000 main.go:141] libmachine: [stdout =====>] : 
	Name                      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                      ----- ----------- ----------------- ------   ------             -------
	force-systemd-flag-052200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0524 20:06:49.019805    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:49.019805    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName force-systemd-flag-052200 -DynamicMemoryEnabled $false
	I0524 20:06:49.991249    7000 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:06:49.991249    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:49.991249    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor force-systemd-flag-052200 -Count 2
	I0524 20:06:53.489976   10868 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.2134033s)
	I0524 20:06:53.492790   10868 out.go:177] 
	W0524 20:06:53.495379   10868 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0524 20:06:53.495379   10868 out.go:239] * 
	W0524 20:06:53.497289   10868 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 20:06:53.499747   10868 out.go:177] 
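	To gather the diagnostics the RUNTIME_ENABLE message above asks for, something like the following sketch would work; <profile> is a placeholder, since the failing profile name is not shown in this excerpt, and minikube.exe is assumed to be on PATH:

    # Inspect docker.service inside the guest, then collect the full minikube log bundle.
    minikube.exe -p <profile> ssh -- "sudo systemctl status docker --no-pager; sudo journalctl -xeu docker --no-pager | tail -n 50"
    minikube.exe -p <profile> logs --file=logs.txt
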
	I0524 20:06:50.955360    7000 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:06:50.955567    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:50.955643    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName force-systemd-flag-052200 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200\boot2docker.iso'
	I0524 20:06:52.341406    7000 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:06:52.341686    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:52.341686    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName force-systemd-flag-052200 -Path 'C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\force-systemd-flag-052200\disk.vhd'
	I0524 20:06:53.812765    7000 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:06:53.812765    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:53.812765    7000 main.go:141] libmachine: Starting VM...
	I0524 20:06:53.812765    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM force-systemd-flag-052200
	I0524 20:06:55.888658    7000 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:06:55.888736    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:55.888736    7000 main.go:141] libmachine: Waiting for host to start...
	I0524 20:06:55.888736    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-052200 ).state
	I0524 20:06:56.863702    7000 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:06:56.863985    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:56.864064    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-052200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:06:58.207114    7000 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:06:58.207386    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:06:59.220896    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-052200 ).state
	I0524 20:07:00.214387    7000 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:07:00.214462    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:07:00.214462    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-052200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:07:01.477622    7000 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:07:01.477863    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:07:02.493245    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-052200 ).state
	I0524 20:07:03.471507    7000 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:07:03.471507    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:07:03.471507    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-052200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:07:04.778198    7000 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:07:04.778379    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:07:05.783317    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-052200 ).state
	I0524 20:07:06.764172    7000 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:07:06.764172    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:07:06.764276    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-052200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:07:08.087517    7000 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:07:08.087554    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:07:09.101791    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM force-systemd-flag-052200 ).state
	I0524 20:07:09.951943    7000 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:07:09.952012    7000 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:07:09.952012    7000 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM force-systemd-flag-052200 ).networkadapters[0]).ipaddresses[0]
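	For reference, the Hyper-V calls the libmachine driver logs above (Convert-VHD through Start-VM, then polling for a guest IP) condense into the following PowerShell sketch; the VM name and paths are taken from the log, and the polling loop is a simplification of the driver's repeated state/IP checks:

    # Condensed sketch of the provisioning sequence for the force-systemd-flag-052200 profile.
    $name = 'force-systemd-flag-052200'
    $dir  = "C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\$name"
    Hyper-V\Convert-VHD -Path "$dir\fixed.vhd" -DestinationPath "$dir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$dir\disk.vhd" -SizeBytes 20000MB
    Hyper-V\New-VM $name -Path $dir -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
    Hyper-V\Set-VMMemory -VMName $name -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor $name -Count 2
    Hyper-V\Set-VMDvdDrive -VMName $name -Path "$dir\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName $name -Path "$dir\disk.vhd"
    Hyper-V\Start-VM $name
    # The driver then polls until the guest reports an IP address on its first adapter:
    while (-not ((Hyper-V\Get-VM $name).NetworkAdapters[0].IPAddresses[0])) { Start-Sleep -Seconds 1 }
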
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-05-24 20:02:08 UTC, ends at Wed 2023-05-24 20:07:17 UTC. --
	May 24 20:06:26 pause-893100 dockerd[6221]: time="2023-05-24T20:06:26.213514084Z" level=info msg="cleaning up dead shim" namespace=moby
	May 24 20:06:26 pause-893100 cri-dockerd[6807]: W0524 20:06:26.394085    6807 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	May 24 20:06:29 pause-893100 cri-dockerd[6807]: time="2023-05-24T20:06:29Z" level=error msg="Failed to retrieve checkpoint for sandbox 81a385409af85a997db4aaacc80ab35e009a7d677fc84f26a5cfba52b9eabf1a: checkpoint is not found"
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.378231948Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.378504243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.378547743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.378565442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.411302259Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.412410839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.412515037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:30 pause-893100 dockerd[6221]: time="2023-05-24T20:06:30.412609236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:34 pause-893100 cri-dockerd[6807]: time="2023-05-24T20:06:34Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 24 20:06:35 pause-893100 dockerd[6221]: time="2023-05-24T20:06:35.986241824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:35 pause-893100 dockerd[6221]: time="2023-05-24T20:06:35.986774115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:35 pause-893100 dockerd[6221]: time="2023-05-24T20:06:35.987020312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:35 pause-893100 dockerd[6221]: time="2023-05-24T20:06:35.987142510Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:36 pause-893100 dockerd[6221]: time="2023-05-24T20:06:36.152748208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:36 pause-893100 dockerd[6221]: time="2023-05-24T20:06:36.153251500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:36 pause-893100 dockerd[6221]: time="2023-05-24T20:06:36.154403682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:36 pause-893100 dockerd[6221]: time="2023-05-24T20:06:36.154803676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:37 pause-893100 cri-dockerd[6807]: time="2023-05-24T20:06:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4552fc2992751f81cdc3c57cde81cde90bedb5779501d12180b8f47e264dcc73/resolv.conf as [nameserver 172.27.128.1]"
	May 24 20:06:38 pause-893100 dockerd[6221]: time="2023-05-24T20:06:38.263573924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 24 20:06:38 pause-893100 dockerd[6221]: time="2023-05-24T20:06:38.263947518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 24 20:06:38 pause-893100 dockerd[6221]: time="2023-05-24T20:06:38.264155815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 24 20:06:38 pause-893100 dockerd[6221]: time="2023-05-24T20:06:38.264289413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	e5a274f56a9e2       ead0a4a53df89       41 seconds ago       Running             coredns                   2                   4552fc2992751
	1fd1529e00654       b8aa50768fd67       44 seconds ago       Running             kube-proxy                2                   f4cbf7412a21e
	e66fc6f6ecd48       86b6af7dd652c       49 seconds ago       Running             etcd                      2                   55ff062f84dac
	5f0b4eeb6d53e       ac2b7465ebba9       49 seconds ago       Running             kube-controller-manager   2                   7c7f06536601c
	ef570771b6adc       c5b13e4f7806d       56 seconds ago       Running             kube-apiserver            2                   0992a99facc03
	8100a9e6f30a9       89e70da428d29       56 seconds ago       Running             kube-scheduler            2                   9a69e81f88970
	ebdf7873fb71a       ac2b7465ebba9       58 seconds ago       Created             kube-controller-manager   1                   ce5ccdf7db908
	e32e5e876d0b4       b8aa50768fd67       About a minute ago   Exited              kube-proxy                1                   2fdde93d5dbe1
	f47bd4ea62c00       86b6af7dd652c       About a minute ago   Exited              etcd                      1                   942ff1fce0b3f
	f729780e25038       ead0a4a53df89       About a minute ago   Exited              coredns                   1                   4c6a62dd6d30f
	cdc9a8b351539       c5b13e4f7806d       About a minute ago   Created             kube-apiserver            1                   cb84e1827ba60
	d4b4d742aac0e       89e70da428d29       About a minute ago   Exited              kube-scheduler            1                   081fe0ce41891
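	The container-status table above comes from the post-mortem helper; an equivalent view can be pulled directly from the node with a sketch like this (assuming minikube.exe is on PATH and crictl is available in the guest image):

    # List all CRI containers, including exited ones, on the pause-893100 node.
    minikube.exe -p pause-893100 ssh -- "sudo crictl ps -a"
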
	
	* 
	* ==> coredns [e5a274f56a9e] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = e68c1f66d66a8b21178767f77ec9bbf4538be12549e49c63ad565269f31e317fbc64a6eb8980e12bd093747c3f544a0bc7c04266dffb836ae54229446b5ea471
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55954 - 36367 "HINFO IN 3852336394710323909.3816590159474463866. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.061967461s
	
	* 
	* ==> coredns [f729780e2503] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = e68c1f66d66a8b21178767f77ec9bbf4538be12549e49c63ad565269f31e317fbc64a6eb8980e12bd093747c3f544a0bc7c04266dffb836ae54229446b5ea471
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36600 - 9188 "HINFO IN 3567950089693108609.1612163727084401078. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.066989351s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-893100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-893100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d23ad66c17ded3bf1d7d6fb0fa0ee29881f9547e
	                    minikube.k8s.io/name=pause-893100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_05_24T20_03_31_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 24 May 2023 20:03:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-893100
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 24 May 2023 20:07:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 24 May 2023 20:06:34 +0000   Wed, 24 May 2023 20:03:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 24 May 2023 20:06:34 +0000   Wed, 24 May 2023 20:03:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 24 May 2023 20:06:34 +0000   Wed, 24 May 2023 20:03:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 24 May 2023 20:06:34 +0000   Wed, 24 May 2023 20:03:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.27.136.175
	  Hostname:    pause-893100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	System Info:
	  Machine ID:                 471d3a12d2fe4889a2e10df10898b515
	  System UUID:                bbf3149d-6008-ce47-9412-ff63c665df4c
	  Boot ID:                    faf33dd5-7445-44d1-b73c-9650c37d87a8
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-ngwxf                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m36s
	  kube-system                 etcd-pause-893100                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         3m51s
	  kube-system                 kube-apiserver-pause-893100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-controller-manager-pause-893100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-proxy-c5vrt                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-scheduler-pause-893100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m35s                kube-proxy       
	  Normal  Starting                 45s                  kube-proxy       
	  Normal  Starting                 4m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node pause-893100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node pause-893100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node pause-893100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     3m51s                kubelet          Node pause-893100 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  3m51s                kubelet          Node pause-893100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m51s                kubelet          Node pause-893100 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  3m51s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m51s                kubelet          Starting kubelet.
	  Normal  NodeReady                3m45s                kubelet          Node pause-893100 status is now: NodeReady
	  Normal  RegisteredNode           3m39s                node-controller  Node pause-893100 event: Registered Node pause-893100 in Controller
	  Normal  Starting                 52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)    kubelet          Node pause-893100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)    kubelet          Node pause-893100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x7 over 52s)    kubelet          Node pause-893100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           34s                  node-controller  Node pause-893100 event: Registered Node pause-893100 in Controller
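	The node dump above matches what kubectl reports directly; a minimal sketch for re-running it against the same profile (assuming the kubeconfig context carries the profile name):

    # Reproduce the "describe nodes" section for the pause-893100 control-plane node.
    kubectl --context pause-893100 describe node pause-893100
    kubectl --context pause-893100 get node pause-893100 -o wide
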
	
	* 
	* ==> dmesg <==
	* [  +0.756962] systemd-fstab-generator[1069]: Ignoring "noauto" for root device
	[  +0.631769] systemd-fstab-generator[1107]: Ignoring "noauto" for root device
	[  +0.166505] systemd-fstab-generator[1118]: Ignoring "noauto" for root device
	[  +0.197705] systemd-fstab-generator[1131]: Ignoring "noauto" for root device
	[  +1.859023] systemd-fstab-generator[1278]: Ignoring "noauto" for root device
	[  +0.184859] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[  +0.206936] systemd-fstab-generator[1300]: Ignoring "noauto" for root device
	[  +0.178068] systemd-fstab-generator[1311]: Ignoring "noauto" for root device
	[  +0.248119] systemd-fstab-generator[1325]: Ignoring "noauto" for root device
	[  +8.468329] systemd-fstab-generator[1585]: Ignoring "noauto" for root device
	[  +1.018259] kauditd_printk_skb: 68 callbacks suppressed
	[ +14.412599] systemd-fstab-generator[2759]: Ignoring "noauto" for root device
	[ +24.906887] kauditd_printk_skb: 30 callbacks suppressed
	[May24 20:05] systemd-fstab-generator[5586]: Ignoring "noauto" for root device
	[  +0.619768] systemd-fstab-generator[5621]: Ignoring "noauto" for root device
	[  +0.287909] systemd-fstab-generator[5632]: Ignoring "noauto" for root device
	[  +0.292147] systemd-fstab-generator[5645]: Ignoring "noauto" for root device
	[May24 20:06] systemd-fstab-generator[6526]: Ignoring "noauto" for root device
	[  +0.246390] systemd-fstab-generator[6588]: Ignoring "noauto" for root device
	[  +0.273622] systemd-fstab-generator[6607]: Ignoring "noauto" for root device
	[  +0.258422] systemd-fstab-generator[6682]: Ignoring "noauto" for root device
	[  +0.397514] systemd-fstab-generator[6751]: Ignoring "noauto" for root device
	[  +2.062841] kauditd_printk_skb: 34 callbacks suppressed
	[ +15.872825] kauditd_printk_skb: 11 callbacks suppressed
	[  +2.667936] systemd-fstab-generator[8335]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [e66fc6f6ecd4] <==
	* {"level":"info","ts":"2023-05-24T20:06:32.255Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b55612564fca3578","local-member-attributes":"{Name:pause-893100 ClientURLs:[https://172.27.136.175:2379]}","request-path":"/0/members/b55612564fca3578/attributes","cluster-id":"b3c6091156c933b8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-24T20:06:32.255Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T20:06:32.255Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T20:06:32.256Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T20:06:32.257Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"172.27.136.175:2379"}
	{"level":"info","ts":"2023-05-24T20:06:32.257Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T20:06:32.257Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-24T20:07:15.973Z","caller":"traceutil/trace.go:171","msg":"trace[376121082] transaction","detail":"{read_only:false; response_revision:532; number_of_response:1; }","duration":"308.874336ms","start":"2023-05-24T20:07:15.664Z","end":"2023-05-24T20:07:15.973Z","steps":["trace[376121082] 'process raft request'  (duration: 308.643839ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-24T20:07:15.974Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-05-24T20:07:15.664Z","time spent":"309.056435ms","remote":"127.0.0.1:34852","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":537,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/pause-893100\" mod_revision:529 > success:<request_put:<key:\"/registry/leases/kube-node-lease/pause-893100\" value_size:484 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/pause-893100\" > >"}
	{"level":"warn","ts":"2023-05-24T20:07:16.433Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.047386ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3852979355689255537 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-qhkidx6tddeaj3b4st6l25ll3e\" mod_revision:530 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-qhkidx6tddeaj3b4st6l25ll3e\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-qhkidx6tddeaj3b4st6l25ll3e\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-05-24T20:07:16.433Z","caller":"traceutil/trace.go:171","msg":"trace[1232436786] linearizableReadLoop","detail":"{readStateIndex:598; appliedIndex:597; }","duration":"364.634502ms","start":"2023-05-24T20:07:16.069Z","end":"2023-05-24T20:07:16.433Z","steps":["trace[1232436786] 'read index received'  (duration: 111.15692ms)","trace[1232436786] 'applied index is now lower than readState.Index'  (duration: 253.476482ms)"],"step_count":2}
	{"level":"info","ts":"2023-05-24T20:07:16.433Z","caller":"traceutil/trace.go:171","msg":"trace[271194167] transaction","detail":"{read_only:false; response_revision:533; number_of_response:1; }","duration":"584.448467ms","start":"2023-05-24T20:07:15.849Z","end":"2023-05-24T20:07:16.433Z","steps":["trace[271194167] 'process raft request'  (duration: 330.784988ms)","trace[271194167] 'compare'  (duration: 252.72879ms)"],"step_count":2}
	{"level":"warn","ts":"2023-05-24T20:07:16.434Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-05-24T20:07:15.849Z","time spent":"584.516467ms","remote":"127.0.0.1:34852","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":677,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-qhkidx6tddeaj3b4st6l25ll3e\" mod_revision:530 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-qhkidx6tddeaj3b4st6l25ll3e\" value_size:604 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-qhkidx6tddeaj3b4st6l25ll3e\" > >"}
	{"level":"warn","ts":"2023-05-24T20:07:16.434Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"365.193395ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-05-24T20:07:16.434Z","caller":"traceutil/trace.go:171","msg":"trace[1941061702] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:533; }","duration":"365.219195ms","start":"2023-05-24T20:07:16.069Z","end":"2023-05-24T20:07:16.434Z","steps":["trace[1941061702] 'agreement among raft nodes before linearized reading'  (duration: 365.137696ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-24T20:07:16.434Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-05-24T20:07:16.068Z","time spent":"365.259995ms","remote":"127.0.0.1:34792","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-05-24T20:07:17.049Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.558439ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3852979355689255545 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.27.136.175\" mod_revision:531 > success:<request_put:<key:\"/registry/masterleases/172.27.136.175\" value_size:67 lease:3852979355689255543 >> failure:<request_range:<key:\"/registry/masterleases/172.27.136.175\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-05-24T20:07:17.049Z","caller":"traceutil/trace.go:171","msg":"trace[1445956124] transaction","detail":"{read_only:false; response_revision:534; number_of_response:1; }","duration":"524.940557ms","start":"2023-05-24T20:07:16.524Z","end":"2023-05-24T20:07:17.049Z","steps":["trace[1445956124] 'process raft request'  (duration: 302.22842ms)","trace[1445956124] 'compare'  (duration: 222.404941ms)"],"step_count":2}
	{"level":"warn","ts":"2023-05-24T20:07:17.050Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-05-24T20:07:16.524Z","time spent":"525.081156ms","remote":"127.0.0.1:34804","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":120,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/172.27.136.175\" mod_revision:531 > success:<request_put:<key:\"/registry/masterleases/172.27.136.175\" value_size:67 lease:3852979355689255543 >> failure:<request_range:<key:\"/registry/masterleases/172.27.136.175\" > >"}
	{"level":"info","ts":"2023-05-24T20:07:21.920Z","caller":"traceutil/trace.go:171","msg":"trace[1482132010] linearizableReadLoop","detail":"{readStateIndex:601; appliedIndex:600; }","duration":"271.985994ms","start":"2023-05-24T20:07:21.648Z","end":"2023-05-24T20:07:21.920Z","steps":["trace[1482132010] 'read index received'  (duration: 271.755097ms)","trace[1482132010] 'applied index is now lower than readState.Index'  (duration: 229.397µs)"],"step_count":2}
	{"level":"warn","ts":"2023-05-24T20:07:21.921Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.649386ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" limit:500 ","response":"range_response_count:64 size:41901"}
	{"level":"info","ts":"2023-05-24T20:07:21.921Z","caller":"traceutil/trace.go:171","msg":"trace[850911712] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:64; response_revision:534; }","duration":"272.772385ms","start":"2023-05-24T20:07:21.648Z","end":"2023-05-24T20:07:21.921Z","steps":["trace[850911712] 'agreement among raft nodes before linearized reading'  (duration: 272.180892ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-24T20:07:22.669Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"601.066645ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-05-24T20:07:22.669Z","caller":"traceutil/trace.go:171","msg":"trace[1177015323] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:534; }","duration":"601.168344ms","start":"2023-05-24T20:07:22.068Z","end":"2023-05-24T20:07:22.669Z","steps":["trace[1177015323] 'range keys from in-memory index tree'  (duration: 600.976646ms)"],"step_count":1}
	{"level":"warn","ts":"2023-05-24T20:07:22.669Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-05-24T20:07:22.068Z","time spent":"601.238543ms","remote":"127.0.0.1:34792","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	
	* 
	* ==> etcd [f47bd4ea62c0] <==
	* {"level":"info","ts":"2023-05-24T20:06:14.524Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-05-24T20:06:14.524Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b55612564fca3578","initial-advertise-peer-urls":["https://172.27.136.175:2380"],"listen-peer-urls":["https://172.27.136.175:2380"],"advertise-client-urls":["https://172.27.136.175:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.27.136.175:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-05-24T20:06:14.524Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-05-24T20:06:14.525Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"172.27.136.175:2380"}
	{"level":"info","ts":"2023-05-24T20:06:14.525Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"172.27.136.175:2380"}
	{"level":"info","ts":"2023-05-24T20:06:15.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 is starting a new election at term 2"}
	{"level":"info","ts":"2023-05-24T20:06:15.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-05-24T20:06:15.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 received MsgPreVoteResp from b55612564fca3578 at term 2"}
	{"level":"info","ts":"2023-05-24T20:06:15.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 became candidate at term 3"}
	{"level":"info","ts":"2023-05-24T20:06:15.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 received MsgVoteResp from b55612564fca3578 at term 3"}
	{"level":"info","ts":"2023-05-24T20:06:15.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b55612564fca3578 became leader at term 3"}
	{"level":"info","ts":"2023-05-24T20:06:15.589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b55612564fca3578 elected leader b55612564fca3578 at term 3"}
	{"level":"info","ts":"2023-05-24T20:06:15.602Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b55612564fca3578","local-member-attributes":"{Name:pause-893100 ClientURLs:[https://172.27.136.175:2379]}","request-path":"/0/members/b55612564fca3578/attributes","cluster-id":"b3c6091156c933b8","publish-timeout":"7s"}
	{"level":"info","ts":"2023-05-24T20:06:15.602Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T20:06:15.602Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-05-24T20:06:15.605Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"172.27.136.175:2379"}
	{"level":"info","ts":"2023-05-24T20:06:15.605Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-05-24T20:06:15.605Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-05-24T20:06:15.605Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-05-24T20:06:21.145Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-05-24T20:06:21.145Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"pause-893100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.27.136.175:2380"],"advertise-client-urls":["https://172.27.136.175:2379"]}
	{"level":"info","ts":"2023-05-24T20:06:21.149Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b55612564fca3578","current-leader-member-id":"b55612564fca3578"}
	{"level":"info","ts":"2023-05-24T20:06:21.159Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"172.27.136.175:2380"}
	{"level":"info","ts":"2023-05-24T20:06:21.160Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"172.27.136.175:2380"}
	{"level":"info","ts":"2023-05-24T20:06:21.160Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"pause-893100","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://172.27.136.175:2380"],"advertise-client-urls":["https://172.27.136.175:2379"]}
	
	* 
	* ==> kernel <==
	*  20:07:24 up 5 min,  0 users,  load average: 1.31, 0.94, 0.41
	Linux pause-893100 5.10.57 #1 SMP Sat May 20 03:22:25 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [cdc9a8b35153] <==
	* 
	* 
	* ==> kube-apiserver [ef570771b6ad] <==
	* I0524 20:06:34.878876       1 shared_informer.go:318] Caches are synced for configmaps
	I0524 20:06:34.883994       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0524 20:06:34.884328       1 cache.go:39] Caches are synced for autoregister controller
	I0524 20:06:34.886275       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0524 20:06:34.886559       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0524 20:06:34.896878       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0524 20:06:34.920050       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0524 20:06:34.956049       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0524 20:06:35.001773       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0524 20:06:35.641499       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0524 20:06:37.342019       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0524 20:06:37.478342       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0524 20:06:37.691829       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0524 20:06:37.819551       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0524 20:06:37.842520       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0524 20:06:47.941198       1 controller.go:624] quota admission added evaluator for: endpoints
	I0524 20:06:47.960587       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0524 20:07:16.436489       1 trace.go:219] Trace[1789747434]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:4efdf3f3-fe45-42aa-8ff0-5d7e1290f718,client:127.0.0.1,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/apiserver-qhkidx6tddeaj3b4st6l25ll3e,user-agent:kube-apiserver/v1.27.2 (linux/amd64) kubernetes/7f6f68f,verb:PUT (24-May-2023 20:07:15.847) (total time: 589ms):
	Trace[1789747434]: ["GuaranteedUpdate etcd3" audit-id:4efdf3f3-fe45-42aa-8ff0-5d7e1290f718,key:/leases/kube-system/apiserver-qhkidx6tddeaj3b4st6l25ll3e,type:*coordination.Lease,resource:leases.coordination.k8s.io 588ms (20:07:15.847)
	Trace[1789747434]:  ---"Txn call completed" 587ms (20:07:16.436)]
	Trace[1789747434]: [589.300211ms] [589.300211ms] END
	I0524 20:07:17.050821       1 trace.go:219] Trace[1768424817]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.27.136.175,type:*v1.Endpoints,resource:apiServerIPInfo (24-May-2023 20:07:16.440) (total time: 610ms):
	Trace[1768424817]: ---"Transaction prepared" 82ms (20:07:16.524)
	Trace[1768424817]: ---"Txn call completed" 526ms (20:07:17.050)
	Trace[1768424817]: [610.434372ms] [610.434372ms] END
	
	* 
	* ==> kube-controller-manager [5f0b4eeb6d53] <==
	* I0524 20:06:47.948096       1 shared_informer.go:318] Caches are synced for stateful set
	I0524 20:06:47.952440       1 shared_informer.go:318] Caches are synced for deployment
	I0524 20:06:47.953637       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0524 20:06:47.954336       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0524 20:06:47.956827       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0524 20:06:47.965952       1 shared_informer.go:318] Caches are synced for attach detach
	I0524 20:06:47.969974       1 shared_informer.go:318] Caches are synced for cronjob
	I0524 20:06:47.976196       1 shared_informer.go:318] Caches are synced for daemon sets
	I0524 20:06:47.983149       1 shared_informer.go:318] Caches are synced for service account
	I0524 20:06:47.983637       1 shared_informer.go:318] Caches are synced for crt configmap
	I0524 20:06:47.990899       1 shared_informer.go:318] Caches are synced for HPA
	I0524 20:06:47.997887       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0524 20:06:47.998898       1 shared_informer.go:318] Caches are synced for taint
	I0524 20:06:47.999423       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0524 20:06:48.000634       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0524 20:06:48.001058       1 event.go:307] "Event occurred" object="pause-893100" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-893100 event: Registered Node pause-893100 in Controller"
	I0524 20:06:48.001252       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0524 20:06:48.001643       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-893100"
	I0524 20:06:48.002990       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0524 20:06:48.002866       1 taint_manager.go:211] "Sending events to api server"
	I0524 20:06:48.044488       1 shared_informer.go:318] Caches are synced for resource quota
	I0524 20:06:48.114175       1 shared_informer.go:318] Caches are synced for resource quota
	I0524 20:06:48.443393       1 shared_informer.go:318] Caches are synced for garbage collector
	I0524 20:06:48.443521       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0524 20:06:48.507641       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [ebdf7873fb71] <==
	* 
	* 
	* ==> kube-proxy [1fd1529e0065] <==
	* I0524 20:06:36.562799       1 node.go:141] Successfully retrieved node IP: 172.27.136.175
	I0524 20:06:36.563216       1 server_others.go:110] "Detected node IP" address="172.27.136.175"
	I0524 20:06:36.563274       1 server_others.go:551] "Using iptables proxy"
	I0524 20:06:36.662970       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0524 20:06:36.663015       1 server_others.go:190] "Using iptables Proxier"
	I0524 20:06:36.667925       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0524 20:06:36.668648       1 server.go:657] "Version info" version="v1.27.2"
	I0524 20:06:36.668997       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0524 20:06:36.670265       1 config.go:188] "Starting service config controller"
	I0524 20:06:36.670290       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0524 20:06:36.670328       1 config.go:97] "Starting endpoint slice config controller"
	I0524 20:06:36.670335       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0524 20:06:36.674390       1 config.go:315] "Starting node config controller"
	I0524 20:06:36.674562       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0524 20:06:36.771085       1 shared_informer.go:318] Caches are synced for service config
	I0524 20:06:36.771149       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0524 20:06:36.775259       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [e32e5e876d0b] <==
	* E0524 20:06:18.003611       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-893100": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:19.140396       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-893100": dial tcp 172.27.136.175:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [8100a9e6f30a] <==
	* W0524 20:06:34.774254       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0524 20:06:34.775842       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0524 20:06:34.774724       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0524 20:06:34.780426       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0524 20:06:34.775019       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0524 20:06:34.781027       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0524 20:06:34.781288       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0524 20:06:34.784201       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0524 20:06:34.775356       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0524 20:06:34.775442       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0524 20:06:34.775582       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0524 20:06:34.775728       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0524 20:06:34.775765       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0524 20:06:34.775817       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0524 20:06:34.780361       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0524 20:06:34.775238       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 20:06:34.786100       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0524 20:06:34.786130       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0524 20:06:34.786228       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0524 20:06:34.786431       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0524 20:06:34.786455       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0524 20:06:34.786469       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0524 20:06:34.786801       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0524 20:06:34.787033       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0524 20:06:36.441574       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [d4b4d742aac0] <==
	* W0524 20:06:05.363879       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://172.27.136.175:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.364088       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://172.27.136.175:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.426924       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://172.27.136.175:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.427109       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://172.27.136.175:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.557879       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://172.27.136.175:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.557927       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://172.27.136.175:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.562781       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://172.27.136.175:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.562831       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://172.27.136.175:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.635809       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.27.136.175:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.635852       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.27.136.175:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.649969       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://172.27.136.175:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.650011       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://172.27.136.175:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.677115       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://172.27.136.175:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.677305       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://172.27.136.175:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.758963       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://172.27.136.175:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.759110       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://172.27.136.175:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.811868       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.27.136.175:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.812552       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.27.136.175:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.832382       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://172.27.136.175:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.832431       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://172.27.136.175:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	W0524 20:06:05.834905       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://172.27.136.175:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	E0524 20:06:05.834938       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://172.27.136.175:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 172.27.136.175:8443: connect: connection refused
	I0524 20:06:06.096040       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0524 20:06:06.096136       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0524 20:06:06.096258       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-05-24 20:02:08 UTC, ends at Wed 2023-05-24 20:07:30 UTC. --
	May 24 20:06:29 pause-893100 kubelet[8350]: I0524 20:06:29.913770    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/3705417d0b6bda4e87fe0a1802e2b07c-etcd-certs\") pod \"etcd-pause-893100\" (UID: \"3705417d0b6bda4e87fe0a1802e2b07c\") " pod="kube-system/etcd-pause-893100"
	May 24 20:06:29 pause-893100 kubelet[8350]: I0524 20:06:29.913921    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abc26da9d5f336247c64c38451a14d81-usr-share-ca-certificates\") pod \"kube-apiserver-pause-893100\" (UID: \"abc26da9d5f336247c64c38451a14d81\") " pod="kube-system/kube-apiserver-pause-893100"
	May 24 20:06:30 pause-893100 kubelet[8350]: I0524 20:06:30.111277    8350 scope.go:115] "RemoveContainer" containerID="ebdf7873fb71aff4f7c65dc81071922f15ffbc8270ff6440ff5d698c81e290da"
	May 24 20:06:30 pause-893100 kubelet[8350]: I0524 20:06:30.159945    8350 scope.go:115] "RemoveContainer" containerID="f47bd4ea62c004a584ab9f2a845ca4e8f17c742f6a34641598f6d5bab2691022"
	May 24 20:06:34 pause-893100 kubelet[8350]: I0524 20:06:34.911261    8350 kubelet_node_status.go:108] "Node was previously registered" node="pause-893100"
	May 24 20:06:34 pause-893100 kubelet[8350]: I0524 20:06:34.911510    8350 kubelet_node_status.go:73] "Successfully registered node" node="pause-893100"
	May 24 20:06:34 pause-893100 kubelet[8350]: I0524 20:06:34.915851    8350 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 24 20:06:34 pause-893100 kubelet[8350]: I0524 20:06:34.917428    8350 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.367546    8350 apiserver.go:52] "Watching apiserver"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.371989    8350 topology_manager.go:212] "Topology Admit Handler"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.372368    8350 topology_manager.go:212] "Topology Admit Handler"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.403588    8350 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.484846    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64-config-volume\") pod \"coredns-5d78c9869d-ngwxf\" (UID: \"5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64\") " pod="kube-system/coredns-5d78c9869d-ngwxf"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.484965    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5jgh\" (UniqueName: \"kubernetes.io/projected/5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64-kube-api-access-b5jgh\") pod \"coredns-5d78c9869d-ngwxf\" (UID: \"5ea7b1b1-c927-4e3d-a5c9-d5e7bca77f64\") " pod="kube-system/coredns-5d78c9869d-ngwxf"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.485026    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4372194d-1a11-4f50-97a2-a9b8863e1d2e-xtables-lock\") pod \"kube-proxy-c5vrt\" (UID: \"4372194d-1a11-4f50-97a2-a9b8863e1d2e\") " pod="kube-system/kube-proxy-c5vrt"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.485175    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4372194d-1a11-4f50-97a2-a9b8863e1d2e-lib-modules\") pod \"kube-proxy-c5vrt\" (UID: \"4372194d-1a11-4f50-97a2-a9b8863e1d2e\") " pod="kube-system/kube-proxy-c5vrt"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.485251    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4372194d-1a11-4f50-97a2-a9b8863e1d2e-kube-proxy\") pod \"kube-proxy-c5vrt\" (UID: \"4372194d-1a11-4f50-97a2-a9b8863e1d2e\") " pod="kube-system/kube-proxy-c5vrt"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.485337    8350 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z52zp\" (UniqueName: \"kubernetes.io/projected/4372194d-1a11-4f50-97a2-a9b8863e1d2e-kube-api-access-z52zp\") pod \"kube-proxy-c5vrt\" (UID: \"4372194d-1a11-4f50-97a2-a9b8863e1d2e\") " pod="kube-system/kube-proxy-c5vrt"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.485385    8350 reconciler.go:41] "Reconciler: start to sync state"
	May 24 20:06:35 pause-893100 kubelet[8350]: I0524 20:06:35.676263    8350 scope.go:115] "RemoveContainer" containerID="e32e5e876d0b41955df703aac3179f7a3b7e88f1a123d54202e89af358a31ba4"
	May 24 20:06:37 pause-893100 kubelet[8350]: I0524 20:06:37.935316    8350 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4552fc2992751f81cdc3c57cde81cde90bedb5779501d12180b8f47e264dcc73"
	May 24 20:07:29 pause-893100 kubelet[8350]: E0524 20:07:29.593915    8350 iptables.go:575] "Could not set up iptables canary" err=<
	May 24 20:07:29 pause-893100 kubelet[8350]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 24 20:07:29 pause-893100 kubelet[8350]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 24 20:07:29 pause-893100 kubelet[8350]:  > table=nat chain=KUBE-KUBELET-CANARY
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-893100 -n pause-893100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-893100 -n pause-893100: (5.9259461s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-893100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (227.77s)
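Note on the failure mode in the post-mortem above: the d4b4d742aac0 kube-scheduler instance fails every list call with "dial tcp 172.27.136.175:8443: connect: connection refused" because the kube-apiserver is restarting, and the replacement instance (8100a9e6f30a) then logs transient RBAC "forbidden" errors until its authentication caches sync at 20:06:36. A quick way to observe the first condition independently of the test harness is a plain TCP probe of the control-plane endpoint. The Go sketch below is illustrative only: the address is copied from the log, the retry budget and interval are arbitrary assumptions, and it is not part of the minikube test suite.

// apiprobe.go - minimal reachability probe for a kube-apiserver endpoint.
// Assumption: the address below is the one seen in the post-mortem log;
// adjust it for your own cluster. Illustrative sketch, not minikube code.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	addr := "172.27.136.175:8443" // control-plane endpoint from the log (assumed)
	deadline := time.Now().Add(2 * time.Minute)

	for attempt := 1; ; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("attempt %d: %s is accepting TCP connections\n", attempt, addr)
			return
		}
		// "connect: connection refused" shows up here while the apiserver restarts.
		fmt.Printf("attempt %d: %v\n", attempt, err)
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "gave up waiting for the apiserver")
			os.Exit(1)
		}
		time.Sleep(2 * time.Second)
	}
}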

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (405.9s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.1024265669.exe start -p stopped-upgrade-998200 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:195: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.1024265669.exe start -p stopped-upgrade-998200 --memory=2200 --vm-driver=hyperv: (3m20.4001779s)
version_upgrade_test.go:204: (dbg) Run:  C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.1024265669.exe -p stopped-upgrade-998200 stop
version_upgrade_test.go:204: (dbg) Done: C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube-v1.6.2.1024265669.exe -p stopped-upgrade-998200 stop: (19.678812s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-998200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0524 20:12:16.682171    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p stopped-upgrade-998200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (3m5.6802675s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-998200] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the hyperv driver based on existing profile
	* Starting control plane node stopped-upgrade-998200 in cluster stopped-upgrade-998200
	* Restarting existing hyperv VM for "stopped-upgrade-998200" ...
	
	

                                                
                                                
-- /stdout --
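The stderr log that follows shows libmachine restarting the Hyper-V VM and then polling two PowerShell expressions, ( Hyper-V\Get-VM stopped-upgrade-998200 ).state and (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0], roughly once per second until the guest reports an IPv4 address (about 45 seconds in this run). The Go sketch below reproduces that wait loop with the standard library for illustration only; the PowerShell expressions are taken from the log, while the VM-name flag, timeout, and interval are assumptions, and this is not minikube's actual hyperv driver code.

// hypervwait.go - wait for a Hyper-V VM to report an IPv4 address.
// The two PowerShell expressions mirror the ones visible in the log below;
// everything else (timeout, polling interval, VM name flag) is an assumption
// for illustration and does not come from the minikube hyperv driver.
package main

import (
	"flag"
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

// ps runs a single PowerShell expression and returns its trimmed stdout.
func ps(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := flag.String("vm", "stopped-upgrade-998200", "Hyper-V VM name")
	flag.Parse()

	deadline := time.Now().Add(5 * time.Minute) // assumed budget
	for {
		state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", *name))
		if err != nil {
			fmt.Fprintln(os.Stderr, "Get-VM failed:", err)
			os.Exit(1)
		}
		// The adapter may report nothing for a while; ignore the error and retry.
		ip, _ := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", *name))
		fmt.Printf("state=%s ip=%q\n", state, ip)

		// A rough IPv4 check: the VM reports "Running" well before the guest
		// has an address, which is why the log repeats this pair of commands.
		if state == "Running" && strings.Count(ip, ".") == 3 {
			fmt.Println("VM is up at", ip)
			return
		}
		if time.Now().After(deadline) {
			fmt.Fprintln(os.Stderr, "timed out waiting for an IPv4 address")
			os.Exit(1)
		}
		time.Sleep(time.Second)
	}
}

The "Running" state alone is not sufficient, which is why both this sketch and the log keep querying the network adapter until an address appears.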
** stderr ** 
	I0524 20:12:03.485373   10976 out.go:296] Setting OutFile to fd 728 ...
	I0524 20:12:03.551049   10976 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 20:12:03.551743   10976 out.go:309] Setting ErrFile to fd 1944...
	I0524 20:12:03.551893   10976 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 20:12:03.579859   10976 out.go:303] Setting JSON to false
	I0524 20:12:03.583441   10976 start.go:125] hostinfo: {"hostname":"minikube1","uptime":7636,"bootTime":1684951486,"procs":160,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2965 Build 19045.2965","kernelVersion":"10.0.19045.2965 Build 19045.2965","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0524 20:12:03.583441   10976 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 20:12:03.704519   10976 out.go:177] * [stopped-upgrade-998200] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	I0524 20:12:03.715961   10976 notify.go:220] Checking for updates...
	I0524 20:12:03.800136   10976 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 20:12:03.894297   10976 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 20:12:04.141098   10976 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0524 20:12:04.281441   10976 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 20:12:04.477465   10976 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 20:12:04.502248   10976 config.go:182] Loaded profile config "stopped-upgrade-998200": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0524 20:12:04.502248   10976 start_flags.go:683] config upgrade: Driver=hyperv
	I0524 20:12:04.502248   10976 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de
	I0524 20:12:04.502248   10976 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\stopped-upgrade-998200\config.json ...
	I0524 20:12:04.597754   10976 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0524 20:12:04.693352   10976 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 20:12:06.736057   10976 out.go:177] * Using the hyperv driver based on existing profile
	I0524 20:12:06.749877   10976 start.go:295] selected driver: hyperv
	I0524 20:12:06.749966   10976 start.go:870] validating driver "hyperv" against &{Name:stopped-upgrade-998200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0
ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.27.142.139 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
}
	I0524 20:12:06.750376   10976 start.go:881] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 20:12:06.803607   10976 cni.go:84] Creating CNI manager for ""
	I0524 20:12:06.803607   10976 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0524 20:12:06.803607   10976 start_flags.go:319] config:
	{Name:stopped-upgrade-998200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.27.142.139 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 20:12:06.804081   10976 iso.go:125] acquiring lock: {Name:mk3b29db369ab0f922ac5eeb788beee87e18ec94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:12:06.941608   10976 out.go:177] * Starting control plane node stopped-upgrade-998200 in cluster stopped-upgrade-998200
	I0524 20:12:06.985192   10976 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W0524 20:12:07.027014   10976 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0524 20:12:07.028154   10976 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\stopped-upgrade-998200\config.json ...
	I0524 20:12:07.028304   10976 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I0524 20:12:07.028304   10976 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I0524 20:12:07.028416   10976 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I0524 20:12:07.028416   10976 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0524 20:12:07.028416   10976 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0524 20:12:07.028304   10976 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0524 20:12:07.028304   10976 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I0524 20:12:07.028541   10976 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I0524 20:12:07.031586   10976 cache.go:195] Successfully downloaded all kic artifacts
	I0524 20:12:07.031680   10976 start.go:364] acquiring machines lock for stopped-upgrade-998200: {Name:mk1756dfe9622c208593bf9b718faf6c1651aea2 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0524 20:12:07.239049   10976 cache.go:107] acquiring lock: {Name:mk69342e4f48cfcf5669830048d73215a892bfa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:12:07.239049   10976 cache.go:107] acquiring lock: {Name:mka7be082bbc64a256cc388eda31b6c9edba386f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:12:07.239049   10976 cache.go:107] acquiring lock: {Name:mk7a50c4bf2c20bec1fff9de3ac74780139c1c4b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:12:07.239049   10976 cache.go:107] acquiring lock: {Name:mkcd99a49ef11cbbf53d95904dadb7eadb7e30f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:12:07.239049   10976 cache.go:107] acquiring lock: {Name:mkbbc88bc55edd0ef8bd1c53673fe74e0129caa1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:12:07.239049   10976 cache.go:107] acquiring lock: {Name:mk67b634fe9a890edc5195da54a2f3093e0c8f30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:12:07.239049   10976 cache.go:107] acquiring lock: {Name:mkf253ced278c18e0b579f9f5e07f6a2fe7db678 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:12:07.239049   10976 cache.go:107] acquiring lock: {Name:mk4e8ee16ba5b475b341c78282e92381b8584a70 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 20:12:07.239049   10976 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 exists
	I0524 20:12:07.239049   10976 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 exists
	I0524 20:12:07.239049   10976 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 exists
	I0524 20:12:07.239049   10976 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 exists
	I0524 20:12:07.239049   10976 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.17.0" took 210.6339ms
	I0524 20:12:07.239049   10976 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 exists
	I0524 20:12:07.239049   10976 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 succeeded
	I0524 20:12:07.239049   10976 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 exists
	I0524 20:12:07.239049   10976 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0524 20:12:07.239049   10976 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.1" took 210.5082ms
	I0524 20:12:07.239049   10976 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 succeeded
	I0524 20:12:07.239049   10976 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns_1.6.5" took 210.2598ms
	I0524 20:12:07.239049   10976 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 exists
	I0524 20:12:07.239589   10976 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 succeeded
	I0524 20:12:07.239049   10976 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.4.3-0" took 210.6339ms
	I0524 20:12:07.239589   10976 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 succeeded
	I0524 20:12:07.239589   10976 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 211.0478ms
	I0524 20:12:07.239679   10976 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.17.0" took 211.2637ms
	I0524 20:12:07.239752   10976 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0524 20:12:07.239049   10976 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.17.0" took 210.5082ms
	I0524 20:12:07.239752   10976 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 succeeded
	I0524 20:12:07.239589   10976 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.17.0" took 211.2848ms
	I0524 20:12:07.239854   10976 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 succeeded
	I0524 20:12:07.239752   10976 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 succeeded
	I0524 20:12:07.239854   10976 cache.go:87] Successfully saved all images to host disk.
	I0524 20:13:48.536951   10976 start.go:368] acquired machines lock for "stopped-upgrade-998200" in 1m41.5053217s
	I0524 20:13:48.536951   10976 start.go:96] Skipping create...Using existing machine configuration
	I0524 20:13:48.538249   10976 fix.go:55] fixHost starting: minikube
	I0524 20:13:48.539056   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:13:49.276998   10976 main.go:141] libmachine: [stdout =====>] : Off
	
	I0524 20:13:49.277118   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:13:49.277196   10976 fix.go:103] recreateIfNeeded on stopped-upgrade-998200: state=Stopped err=<nil>
	W0524 20:13:49.277240   10976 fix.go:129] unexpected machine state, will restart: <nil>
	I0524 20:13:49.281050   10976 out.go:177] * Restarting existing hyperv VM for "stopped-upgrade-998200" ...
	I0524 20:13:49.288597   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM stopped-upgrade-998200
	I0524 20:13:51.005910   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:13:51.006018   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:13:51.006018   10976 main.go:141] libmachine: Waiting for host to start...
	I0524 20:13:51.006018   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:13:51.829980   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:13:51.830344   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:13:51.830344   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:13:52.946622   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:13:52.946622   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:13:53.948573   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:13:54.846570   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:13:54.846721   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:13:54.846721   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:13:56.069033   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:13:56.069033   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:13:57.077433   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:13:57.853689   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:13:57.853819   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:13:57.853874   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:13:58.924571   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:13:58.924629   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:13:59.927956   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:00.719169   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:00.719169   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:00.719424   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:01.854977   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:14:01.855293   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:02.869385   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:03.645553   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:03.645553   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:03.645553   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:04.738394   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:14:04.738557   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:05.747376   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:06.539782   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:06.539948   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:06.539948   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:07.641733   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:14:07.641733   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:08.646898   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:09.403731   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:09.403731   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:09.404013   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:10.498768   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:14:10.498796   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:11.513360   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:12.326047   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:12.326428   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:12.326487   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:13.418912   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:14:13.419013   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:14.422928   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:15.199843   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:15.200000   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:15.200067   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:16.280047   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:14:16.280047   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:17.294430   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:18.087909   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:18.088494   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:18.088494   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:19.165562   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:14:19.165817   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:20.179303   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:21.017853   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:21.017853   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:21.017853   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:22.171083   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:14:22.171318   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:23.186588   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:23.986388   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:23.986388   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:23.986705   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:25.064135   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:14:25.064135   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:26.077226   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:26.866701   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:26.866701   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:26.866701   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:27.961363   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:14:27.961363   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:28.976184   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:29.771636   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:29.771784   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:29.771784   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:30.841488   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:14:30.841488   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:31.849523   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:32.636404   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:32.636404   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:32.636404   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:33.761059   10976 main.go:141] libmachine: [stdout =====>] : 
	I0524 20:14:33.761059   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:34.770040   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:35.610103   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:35.610136   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:35.610136   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:37.058261   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:14:37.058478   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:37.060500   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:37.884810   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:37.884810   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:37.884916   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:39.084474   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:14:39.084474   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:39.084922   10976 profile.go:148] Saving config to C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\stopped-upgrade-998200\config.json ...
	I0524 20:14:39.087523   10976 machine.go:88] provisioning docker machine ...
	I0524 20:14:39.087523   10976 buildroot.go:166] provisioning hostname "stopped-upgrade-998200"
	I0524 20:14:39.087523   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:39.879770   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:39.879770   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:39.879770   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:41.066407   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:14:41.066477   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:41.069841   10976 main.go:141] libmachine: Using SSH client type: native
	I0524 20:14:41.071278   10976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.142.139 22 <nil> <nil>}
	I0524 20:14:41.071278   10976 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-998200 && echo "stopped-upgrade-998200" | sudo tee /etc/hostname
	I0524 20:14:41.229892   10976 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-998200
	
	I0524 20:14:41.229892   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:42.061920   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:42.061920   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:42.062036   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:43.278310   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:14:43.278459   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:43.282722   10976 main.go:141] libmachine: Using SSH client type: native
	I0524 20:14:43.284000   10976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.142.139 22 <nil> <nil>}
	I0524 20:14:43.284068   10976 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-998200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-998200/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-998200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0524 20:14:43.416638   10976 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0524 20:14:43.416693   10976 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube1\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube1\minikube-integration\.minikube}
	I0524 20:14:43.416759   10976 buildroot.go:174] setting up certificates
	I0524 20:14:43.416824   10976 provision.go:83] configureAuth start
	I0524 20:14:43.416863   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:44.243413   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:44.243534   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:44.243589   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:45.428698   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:14:45.428698   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:45.428698   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:46.250389   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:46.250389   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:46.250449   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:47.402102   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:14:47.402102   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:47.402214   10976 provision.go:138] copyHostCerts
	I0524 20:14:47.402484   10976 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem, removing ...
	I0524 20:14:47.402484   10976 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.pem
	I0524 20:14:47.402484   10976 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0524 20:14:47.404242   10976 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem, removing ...
	I0524 20:14:47.404242   10976 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\cert.pem
	I0524 20:14:47.404732   10976 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0524 20:14:47.406164   10976 exec_runner.go:144] found C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem, removing ...
	I0524 20:14:47.406384   10976 exec_runner.go:203] rm: C:\Users\jenkins.minikube1\minikube-integration\.minikube\key.pem
	I0524 20:14:47.406689   10976 exec_runner.go:151] cp: C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube1\minikube-integration\.minikube/key.pem (1679 bytes)
	I0524 20:14:47.406689   10976 provision.go:112] generating server cert: C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.stopped-upgrade-998200 san=[172.27.142.139 172.27.142.139 localhost 127.0.0.1 minikube stopped-upgrade-998200]
	I0524 20:14:47.559723   10976 provision.go:172] copyRemoteCerts
	I0524 20:14:47.568781   10976 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0524 20:14:47.568781   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:48.368865   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:48.368865   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:48.368865   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:49.591399   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:14:49.591399   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:49.591974   10976 sshutil.go:53] new ssh client: &{IP:172.27.142.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-998200\id_rsa Username:docker}
	I0524 20:14:49.695248   10976 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.1263927s)
	I0524 20:14:49.695248   10976 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0524 20:14:49.716321   10976 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0524 20:14:49.737011   10976 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0524 20:14:49.756035   10976 provision.go:86] duration metric: configureAuth took 6.3392139s
	I0524 20:14:49.756035   10976 buildroot.go:189] setting minikube options for container-runtime
	I0524 20:14:49.756846   10976 config.go:182] Loaded profile config "stopped-upgrade-998200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0524 20:14:49.757129   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:50.542663   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:50.542663   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:50.542766   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:51.696072   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:14:51.696386   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:51.701388   10976 main.go:141] libmachine: Using SSH client type: native
	I0524 20:14:51.702304   10976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.142.139 22 <nil> <nil>}
	I0524 20:14:51.702304   10976 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0524 20:14:51.828252   10976 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0524 20:14:51.828252   10976 buildroot.go:70] root file system type: tmpfs
	I0524 20:14:51.828519   10976 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0524 20:14:51.828585   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:52.607865   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:52.607948   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:52.608005   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:53.760678   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:14:53.760678   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:53.765158   10976 main.go:141] libmachine: Using SSH client type: native
	I0524 20:14:53.766435   10976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.142.139 22 <nil> <nil>}
	I0524 20:14:53.766584   10976 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0524 20:14:53.904669   10976 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0524 20:14:53.904744   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:54.696036   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:54.696036   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:54.696036   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:55.825253   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:14:55.825535   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:55.829779   10976 main.go:141] libmachine: Using SSH client type: native
	I0524 20:14:55.830820   10976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.142.139 22 <nil> <nil>}
	I0524 20:14:55.830820   10976 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0524 20:14:57.282856   10976 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0524 20:14:57.282945   10976 machine.go:91] provisioned docker machine in 18.1953431s
	I0524 20:14:57.282945   10976 start.go:300] post-start starting for "stopped-upgrade-998200" (driver="hyperv")
	I0524 20:14:57.282998   10976 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0524 20:14:57.294297   10976 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0524 20:14:57.294297   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:14:58.066107   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:14:58.066267   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:58.066343   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:14:59.255103   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:14:59.255137   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:14:59.255635   10976 sshutil.go:53] new ssh client: &{IP:172.27.142.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-998200\id_rsa Username:docker}
	I0524 20:14:59.360855   10976 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (2.0665595s)
	I0524 20:14:59.371342   10976 ssh_runner.go:195] Run: cat /etc/os-release
	I0524 20:14:59.377768   10976 info.go:137] Remote host: Buildroot 2019.02.7
	I0524 20:14:59.377768   10976 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\addons for local assets ...
	I0524 20:14:59.377768   10976 filesync.go:126] Scanning C:\Users\jenkins.minikube1\minikube-integration\.minikube\files for local assets ...
	I0524 20:14:59.379216   10976 filesync.go:149] local asset: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem -> 65602.pem in /etc/ssl/certs
	I0524 20:14:59.390050   10976 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0524 20:14:59.398679   10976 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\ssl\certs\65602.pem --> /etc/ssl/certs/65602.pem (1708 bytes)
	I0524 20:14:59.418796   10976 start.go:303] post-start completed in 2.1357045s
	I0524 20:14:59.418864   10976 fix.go:57] fixHost completed within 1m10.8819504s
	I0524 20:14:59.418864   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:15:00.233527   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:15:00.233729   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:15:00.233729   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:15:01.363235   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:15:01.363235   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:15:01.366572   10976 main.go:141] libmachine: Using SSH client type: native
	I0524 20:15:01.368081   10976 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf58500] 0xf5b3a0 <nil>  [] 0s} 172.27.142.139 22 <nil> <nil>}
	I0524 20:15:01.368081   10976 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0524 20:15:01.510402   10976 main.go:141] libmachine: SSH cmd err, output: <nil>: 1684959301.502163973
	
	I0524 20:15:01.510946   10976 fix.go:207] guest clock: 1684959301.502163973
	I0524 20:15:01.510946   10976 fix.go:220] Guest: 2023-05-24 20:15:01.502163973 +0000 UTC Remote: 2023-05-24 20:14:59.4188647 +0000 UTC m=+176.006940901 (delta=2.083299273s)
	I0524 20:15:01.510946   10976 fix.go:191] guest clock delta is within tolerance: 2.083299273s
	I0524 20:15:01.510946   10976 start.go:83] releasing machines lock for "stopped-upgrade-998200", held for 1m12.9740333s
	I0524 20:15:01.511209   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:15:02.341097   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:15:02.341687   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:15:02.342212   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:15:03.543781   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:15:03.543781   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:15:03.547873   10976 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0524 20:15:03.547873   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:15:03.561761   10976 ssh_runner.go:195] Run: cat /version.json
	I0524 20:15:03.937209   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-998200 ).state
	I0524 20:15:04.388985   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:15:04.388985   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:15:04.389173   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:15:04.784699   10976 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 20:15:04.784699   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:15:04.784908   10976 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-998200 ).networkadapters[0]).ipaddresses[0]
	I0524 20:15:05.548714   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:15:05.548714   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:15:05.548714   10976 sshutil.go:53] new ssh client: &{IP:172.27.142.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-998200\id_rsa Username:docker}
	I0524 20:15:05.719778   10976 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.1719071s)
	I0524 20:15:05.988104   10976 main.go:141] libmachine: [stdout =====>] : 172.27.142.139
	
	I0524 20:15:05.988104   10976 main.go:141] libmachine: [stderr =====>] : 
	I0524 20:15:05.988545   10976 sshutil.go:53] new ssh client: &{IP:172.27.142.139 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\stopped-upgrade-998200\id_rsa Username:docker}
	I0524 20:15:06.089268   10976 ssh_runner.go:235] Completed: cat /version.json: (2.1526441s)
	W0524 20:15:06.089621   10976 start.go:409] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0524 20:15:06.100661   10976 ssh_runner.go:195] Run: systemctl --version
	I0524 20:15:06.119092   10976 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0524 20:15:06.127135   10976 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0524 20:15:06.137365   10976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0524 20:15:06.156991   10976 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0524 20:15:06.169210   10976 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0524 20:15:06.169210   10976 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I0524 20:15:06.169210   10976 start.go:481] detecting cgroup driver to use...
	I0524 20:15:06.169482   10976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:15:06.197446   10976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0524 20:15:06.227167   10976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0524 20:15:06.236787   10976 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0524 20:15:06.247945   10976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0524 20:15:06.264962   10976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:15:06.286914   10976 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0524 20:15:06.308167   10976 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0524 20:15:06.330965   10976 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0524 20:15:06.352621   10976 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0524 20:15:06.374048   10976 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0524 20:15:06.391652   10976 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0524 20:15:06.409295   10976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:15:06.538963   10976 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0524 20:15:06.570919   10976 start.go:481] detecting cgroup driver to use...
	I0524 20:15:06.581927   10976 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0524 20:15:06.604936   10976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:15:06.629918   10976 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0524 20:15:06.697788   10976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0524 20:15:06.723482   10976 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0524 20:15:06.742497   10976 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0524 20:15:06.770497   10976 ssh_runner.go:195] Run: which cri-dockerd
	I0524 20:15:06.787612   10976 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0524 20:15:06.796761   10976 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0524 20:15:06.823233   10976 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0524 20:15:06.971321   10976 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0524 20:15:07.086431   10976 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0524 20:15:07.086431   10976 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0524 20:15:07.114006   10976 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0524 20:15:07.241673   10976 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0524 20:15:08.341700   10976 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1000269s)
	I0524 20:15:08.491407   10976 out.go:177] 
	W0524 20:15:08.659980   10976 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0524 20:15:08.787655   10976 out.go:239] * 
	* 
	W0524 20:15:08.787655   10976 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0524 20:15:08.855497   10976 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p stopped-upgrade-998200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (405.90s)
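
If this failure needs manual follow-up, a minimal sketch of how the docker.service error above could be inspected on the guest is shown here. It assumes the stopped-upgrade-998200 VM from this run is still present and running under Hyper-V; the commands only follow the hint printed in the log ("systemctl status docker.service" / "journalctl -xe") and are not part of the test itself.

	# hedged sketch: profile name taken from this report; adjust or skip if the VM was already deleted
	out/minikube-windows-amd64.exe ssh -p stopped-upgrade-998200 "sudo systemctl status docker.service --no-pager"
	out/minikube-windows-amd64.exe ssh -p stopped-upgrade-998200 "sudo journalctl -u docker.service --no-pager -n 50"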

                                                
                                    

Test pass (265/300)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 12.3
4 TestDownloadOnly/v1.16.0/preload-exists 0.07
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.31
10 TestDownloadOnly/v1.27.2/json-events 8.08
11 TestDownloadOnly/v1.27.2/preload-exists 0
14 TestDownloadOnly/v1.27.2/kubectl 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 1.57
17 TestDownloadOnly/DeleteAlwaysSucceeds 1.71
19 TestBinaryMirror 3.67
20 TestOffline 274.28
22 TestAddons/Setup 301
24 TestAddons/parallel/Registry 22.08
25 TestAddons/parallel/Ingress 45.27
26 TestAddons/parallel/InspektorGadget 14.77
27 TestAddons/parallel/MetricsServer 9.77
28 TestAddons/parallel/HelmTiller 19.53
30 TestAddons/parallel/CSI 57.45
31 TestAddons/parallel/Headlamp 27.96
32 TestAddons/parallel/CloudSpanner 8.9
35 TestAddons/serial/GCPAuth/Namespaces 0.47
36 TestAddons/StoppedEnableDisable 29.39
37 TestCertOptions 229.49
38 TestCertExpiration 655.22
39 TestDockerFlags 232.93
40 TestForceSystemdFlag 208.8
41 TestForceSystemdEnv 211.87
46 TestErrorSpam/setup 119.74
47 TestErrorSpam/start 6.08
48 TestErrorSpam/status 15.32
49 TestErrorSpam/pause 10.27
50 TestErrorSpam/unpause 10.36
51 TestErrorSpam/stop 30.7
54 TestFunctional/serial/CopySyncFile 0.03
55 TestFunctional/serial/StartWithProxy 133.73
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 70.59
58 TestFunctional/serial/KubeContext 0.19
59 TestFunctional/serial/KubectlGetPods 0.3
62 TestFunctional/serial/CacheCmd/cache/add_remote 14.76
63 TestFunctional/serial/CacheCmd/cache/add_local 6.09
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.25
65 TestFunctional/serial/CacheCmd/cache/list 0.25
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 3.85
67 TestFunctional/serial/CacheCmd/cache/cache_reload 16.04
68 TestFunctional/serial/CacheCmd/cache/delete 0.5
69 TestFunctional/serial/MinikubeKubectlCmd 0.52
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.42
71 TestFunctional/serial/ExtraConfig 78.27
72 TestFunctional/serial/ComponentHealth 0.24
73 TestFunctional/serial/LogsCmd 4.4
74 TestFunctional/serial/LogsFileCmd 5.26
76 TestFunctional/parallel/ConfigCmd 1.63
78 TestFunctional/parallel/DryRun 4.43
79 TestFunctional/parallel/InternationalLanguage 2.21
80 TestFunctional/parallel/StatusCmd 16.58
84 TestFunctional/parallel/ServiceCmdConnect 28.39
85 TestFunctional/parallel/AddonsCmd 0.76
86 TestFunctional/parallel/PersistentVolumeClaim 41.65
88 TestFunctional/parallel/SSHCmd 8.77
89 TestFunctional/parallel/CpCmd 17.61
90 TestFunctional/parallel/MySQL 56.83
91 TestFunctional/parallel/FileSync 4.37
92 TestFunctional/parallel/CertSync 26.21
96 TestFunctional/parallel/NodeLabels 0.29
98 TestFunctional/parallel/NonActiveRuntimeDisabled 4.09
100 TestFunctional/parallel/License 3.04
101 TestFunctional/parallel/DockerEnv/powershell 19.04
102 TestFunctional/parallel/UpdateContextCmd/no_changes 1.13
103 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 1.15
104 TestFunctional/parallel/UpdateContextCmd/no_clusters 1.07
105 TestFunctional/parallel/ImageCommands/ImageListShort 3.32
106 TestFunctional/parallel/ImageCommands/ImageListTable 3.23
107 TestFunctional/parallel/ImageCommands/ImageListJson 3.18
108 TestFunctional/parallel/ImageCommands/ImageListYaml 3.2
109 TestFunctional/parallel/ImageCommands/ImageBuild 14.27
110 TestFunctional/parallel/ImageCommands/Setup 2.92
111 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 16.8
112 TestFunctional/parallel/Version/short 0.27
113 TestFunctional/parallel/Version/components 3.81
114 TestFunctional/parallel/ProfileCmd/profile_not_create 4.09
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 4.29
117 TestFunctional/parallel/ProfileCmd/profile_list 4.07
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 20.58
121 TestFunctional/parallel/ProfileCmd/profile_json_output 3.9
122 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 10.5
123 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 16.76
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
130 TestFunctional/parallel/ServiceCmd/DeployApp 12.6
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.23
132 TestFunctional/parallel/ImageCommands/ImageRemove 6.43
133 TestFunctional/parallel/ServiceCmd/List 5.91
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 8.75
135 TestFunctional/parallel/ServiceCmd/JSONOutput 6.11
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 8.64
137 TestFunctional/parallel/ServiceCmd/HTTPS 7.71
138 TestFunctional/parallel/ServiceCmd/Format 7.53
139 TestFunctional/parallel/ServiceCmd/URL 7.39
140 TestFunctional/delete_addon-resizer_images 0.65
141 TestFunctional/delete_my-image_image 0.19
142 TestFunctional/delete_minikube_cached_images 0.2
146 TestImageBuild/serial/Setup 121.61
147 TestImageBuild/serial/NormalBuild 5.11
148 TestImageBuild/serial/BuildWithBuildArg 6.01
149 TestImageBuild/serial/BuildWithDockerIgnore 3.55
150 TestImageBuild/serial/BuildWithSpecifiedDockerfile 3.32
153 TestIngressAddonLegacy/StartLegacyK8sCluster 159.53
155 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 26.44
156 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 3.37
157 TestIngressAddonLegacy/serial/ValidateIngressAddons 50.07
160 TestJSONOutput/start/Command 132.89
161 TestJSONOutput/start/Audit 0
163 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/pause/Command 3.69
167 TestJSONOutput/pause/Audit 0
169 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/unpause/Command 3.59
173 TestJSONOutput/unpause/Audit 0
175 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/stop/Command 24.46
179 TestJSONOutput/stop/Audit 0
181 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
183 TestErrorJSONOutput 1.59
188 TestMainNoArgs 0.23
189 TestMinikubeProfile 320.59
192 TestMountStart/serial/StartWithMountFirst 80.94
193 TestMountStart/serial/VerifyMountFirst 3.89
194 TestMountStart/serial/StartWithMountSecond 81.02
195 TestMountStart/serial/VerifyMountSecond 3.78
196 TestMountStart/serial/DeleteFirst 13.32
197 TestMountStart/serial/VerifyMountPostDelete 3.95
198 TestMountStart/serial/Stop 11.87
199 TestMountStart/serial/RestartStopped 63.76
200 TestMountStart/serial/VerifyMountPostStop 3.91
203 TestMultiNode/serial/FreshStart2Nodes 270.59
204 TestMultiNode/serial/DeployApp2Nodes 9.47
206 TestMultiNode/serial/AddNode 134.63
207 TestMultiNode/serial/ProfileList 3.27
208 TestMultiNode/serial/CopyFile 146.78
209 TestMultiNode/serial/StopNode 33.01
210 TestMultiNode/serial/StartAfterStop 95.12
212 TestMultiNode/serial/DeleteNode 30.27
213 TestMultiNode/serial/StopMultiNode 49.08
214 TestMultiNode/serial/RestartMultiNode 196.38
215 TestMultiNode/serial/ValidateNameConflict 165.03
219 TestPreload 353.49
220 TestScheduledStopWindows 217.32
227 TestKubernetesUpgrade 773.78
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.34
239 TestPause/serial/Start 144.25
253 TestStoppedBinaryUpgrade/Setup 1.13
255 TestStoppedBinaryUpgrade/MinikubeLogs 6.97
257 TestStartStop/group/old-k8s-version/serial/FirstStart 346.89
259 TestStartStop/group/no-preload/serial/FirstStart 218.09
261 TestStartStop/group/embed-certs/serial/FirstStart 180.77
263 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 240.75
264 TestStartStop/group/old-k8s-version/serial/DeployApp 11.18
265 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 4.46
266 TestStartStop/group/old-k8s-version/serial/Stop 25.97
267 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 2.62
268 TestStartStop/group/old-k8s-version/serial/SecondStart 571.81
269 TestStartStop/group/no-preload/serial/DeployApp 10.01
270 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 4.82
271 TestStartStop/group/no-preload/serial/Stop 26.73
272 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 2.58
273 TestStartStop/group/embed-certs/serial/DeployApp 24.54
274 TestStartStop/group/no-preload/serial/SecondStart 450.77
275 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 10.51
276 TestStartStop/group/embed-certs/serial/Stop 26.37
277 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 2.97
278 TestStartStop/group/embed-certs/serial/SecondStart 423.44
279 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 19.17
280 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 8.72
281 TestStartStop/group/default-k8s-diff-port/serial/Stop 25.85
282 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 2.65
283 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 415.25
284 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.04
285 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.5
286 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 4.04
287 TestStartStop/group/no-preload/serial/Pause 29.28
288 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.04
289 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.06
290 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.45
291 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.53
292 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 4.24
293 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 4.29
294 TestStartStop/group/embed-certs/serial/Pause 30.67
295 TestStartStop/group/old-k8s-version/serial/Pause 31.8
297 TestStartStop/group/newest-cni/serial/FirstStart 148.46
298 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.08
299 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.04
300 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 4.49
301 TestStartStop/group/default-k8s-diff-port/serial/Pause 32.14
302 TestNetworkPlugins/group/auto/Start 180.17
303 TestNetworkPlugins/group/kindnet/Start 258.42
304 TestNetworkPlugins/group/calico/Start 364.25
305 TestStartStop/group/newest-cni/serial/DeployApp 0
306 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 4.32
307 TestStartStop/group/newest-cni/serial/Stop 31.53
308 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 2.73
309 TestStartStop/group/newest-cni/serial/SecondStart 259.73
310 TestNetworkPlugins/group/auto/KubeletFlags 3.97
311 TestNetworkPlugins/group/auto/NetCatPod 15.68
312 TestNetworkPlugins/group/auto/DNS 0.44
313 TestNetworkPlugins/group/auto/Localhost 0.41
314 TestNetworkPlugins/group/auto/HairPin 0.43
315 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
316 TestNetworkPlugins/group/kindnet/KubeletFlags 4.13
317 TestNetworkPlugins/group/kindnet/NetCatPod 27.7
318 TestNetworkPlugins/group/kindnet/DNS 0.48
319 TestNetworkPlugins/group/kindnet/Localhost 0.45
320 TestNetworkPlugins/group/kindnet/HairPin 0.47
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 4.66
324 TestNetworkPlugins/group/custom-flannel/Start 166.4
325 TestStartStop/group/newest-cni/serial/Pause 33.49
326 TestNetworkPlugins/group/calico/ControllerPod 5.07
327 TestNetworkPlugins/group/calico/KubeletFlags 4.97
328 TestNetworkPlugins/group/calico/NetCatPod 16.86
329 TestNetworkPlugins/group/calico/DNS 0.51
330 TestNetworkPlugins/group/calico/Localhost 0.47
331 TestNetworkPlugins/group/calico/HairPin 0.46
332 TestNetworkPlugins/group/false/Start 178.09
333 TestNetworkPlugins/group/enable-default-cni/Start 211.46
334 TestNetworkPlugins/group/custom-flannel/KubeletFlags 4.2
335 TestNetworkPlugins/group/custom-flannel/NetCatPod 17.75
336 TestNetworkPlugins/group/custom-flannel/DNS 0.45
337 TestNetworkPlugins/group/custom-flannel/Localhost 0.4
338 TestNetworkPlugins/group/custom-flannel/HairPin 0.42
339 TestNetworkPlugins/group/false/KubeletFlags 4.33
340 TestNetworkPlugins/group/false/NetCatPod 32.25
341 TestNetworkPlugins/group/flannel/Start 168.98
342 TestNetworkPlugins/group/false/DNS 0.48
343 TestNetworkPlugins/group/false/Localhost 1.28
344 TestNetworkPlugins/group/false/HairPin 1.18
345 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 4.26
346 TestNetworkPlugins/group/enable-default-cni/NetCatPod 18.15
347 TestNetworkPlugins/group/enable-default-cni/DNS 0.52
348 TestNetworkPlugins/group/enable-default-cni/Localhost 0.45
349 TestNetworkPlugins/group/enable-default-cni/HairPin 0.49
350 TestNetworkPlugins/group/kubenet/Start 197.29
351 TestNetworkPlugins/group/flannel/ControllerPod 6.18
352 TestNetworkPlugins/group/flannel/KubeletFlags 4.43
353 TestNetworkPlugins/group/flannel/NetCatPod 15.78
354 TestNetworkPlugins/group/flannel/DNS 0.53
355 TestNetworkPlugins/group/flannel/Localhost 0.44
356 TestNetworkPlugins/group/flannel/HairPin 0.44
357 TestNetworkPlugins/group/bridge/Start 158.53
358 TestNetworkPlugins/group/kubenet/KubeletFlags 4.19
359 TestNetworkPlugins/group/kubenet/NetCatPod 16.65
360 TestNetworkPlugins/group/kubenet/DNS 0.45
361 TestNetworkPlugins/group/kubenet/Localhost 0.49
362 TestNetworkPlugins/group/kubenet/HairPin 0.42
363 TestNetworkPlugins/group/bridge/KubeletFlags 4.17
364 TestNetworkPlugins/group/bridge/NetCatPod 15.66
365 TestNetworkPlugins/group/bridge/DNS 0.47
366 TestNetworkPlugins/group/bridge/Localhost 0.41
367 TestNetworkPlugins/group/bridge/HairPin 0.4
x
+
TestDownloadOnly/v1.16.0/json-events (12.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-597800 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-597800 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv: (12.2962631s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.30s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-597800
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-597800: exit status 85 (309.9122ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-597800 | minikube1\jenkins | v1.30.1 | 24 May 23 18:39 UTC |          |
	|         | -p download-only-597800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 18:39:38
	Running on machine: minikube1
	Binary: Built with gc go1.20.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 18:39:38.660337    9844 out.go:296] Setting OutFile to fd 580 ...
	I0524 18:39:38.721755    9844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:39:38.721755    9844 out.go:309] Setting ErrFile to fd 584...
	I0524 18:39:38.721755    9844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0524 18:39:38.733865    9844 root.go:312] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0524 18:39:38.745774    9844 out.go:303] Setting JSON to true
	I0524 18:39:38.748208    9844 start.go:125] hostinfo: {"hostname":"minikube1","uptime":2091,"bootTime":1684951486,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2965 Build 19045.2965","kernelVersion":"10.0.19045.2965 Build 19045.2965","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0524 18:39:38.748208    9844 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 18:39:38.764186    9844 out.go:97] [download-only-597800] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	I0524 18:39:38.764619    9844 notify.go:220] Checking for updates...
	I0524 18:39:38.767324    9844 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	W0524 18:39:38.764674    9844 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0524 18:39:38.774546    9844 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0524 18:39:38.776846    9844 out.go:169] MINIKUBE_LOCATION=16573
	I0524 18:39:38.779466    9844 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0524 18:39:38.784567    9844 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0524 18:39:38.786020    9844 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 18:39:41.220641    9844 out.go:97] Using the hyperv driver based on user configuration
	I0524 18:39:41.220641    9844 start.go:295] selected driver: hyperv
	I0524 18:39:41.220641    9844 start.go:870] validating driver "hyperv" against <nil>
	I0524 18:39:41.220641    9844 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0524 18:39:41.276706    9844 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0524 18:39:41.277401    9844 start_flags.go:897] Wait components to verify : map[apiserver:true system_pods:true]
	I0524 18:39:41.277401    9844 cni.go:84] Creating CNI manager for ""
	I0524 18:39:41.277401    9844 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0524 18:39:41.277401    9844 start_flags.go:319] config:
	{Name:download-only-597800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-597800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 18:39:41.278545    9844 iso.go:125] acquiring lock: {Name:mk3b29db369ab0f922ac5eeb788beee87e18ec94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 18:39:41.281090    9844 out.go:97] Downloading VM boot image ...
	I0524 18:39:41.282097    9844 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso.sha256 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.30.1-1684536668-16501-amd64.iso
	I0524 18:39:44.608946    9844 out.go:97] Starting control plane node download-only-597800 in cluster download-only-597800
	I0524 18:39:44.608946    9844 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0524 18:39:44.669530    9844 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0524 18:39:44.669530    9844 cache.go:57] Caching tarball of preloaded images
	I0524 18:39:44.670079    9844 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0524 18:39:44.673552    9844 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0524 18:39:44.673621    9844 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0524 18:39:44.762701    9844 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0524 18:39:48.259199    9844 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0524 18:39:48.259199    9844 preload.go:256] verifying checksum of C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-597800"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.31s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.2/json-events (8.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-597800 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-597800 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=hyperv: (8.0840337s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (8.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/kubectl
--- PASS: TestDownloadOnly/v1.27.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.27.2/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-597800
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-597800: exit status 85 (292.7344ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-597800 | minikube1\jenkins | v1.30.1 | 24 May 23 18:39 UTC |          |
	|         | -p download-only-597800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	| start   | -o=json --download-only        | download-only-597800 | minikube1\jenkins | v1.30.1 | 24 May 23 18:39 UTC |          |
	|         | -p download-only-597800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/05/24 18:39:51
	Running on machine: minikube1
	Binary: Built with gc go1.20.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0524 18:39:51.337806    8808 out.go:296] Setting OutFile to fd 612 ...
	I0524 18:39:51.404089    8808 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:39:51.404089    8808 out.go:309] Setting ErrFile to fd 560...
	I0524 18:39:51.404089    8808 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0524 18:39:51.416299    8808 root.go:312] Error reading config file at C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube1\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0524 18:39:51.424779    8808 out.go:303] Setting JSON to true
	I0524 18:39:51.426986    8808 start.go:125] hostinfo: {"hostname":"minikube1","uptime":2104,"bootTime":1684951486,"procs":150,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2965 Build 19045.2965","kernelVersion":"10.0.19045.2965 Build 19045.2965","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0524 18:39:51.426986    8808 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 18:39:51.431973    8808 out.go:97] [download-only-597800] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	I0524 18:39:51.432175    8808 notify.go:220] Checking for updates...
	I0524 18:39:51.434528    8808 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 18:39:51.437336    8808 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0524 18:39:51.439968    8808 out.go:169] MINIKUBE_LOCATION=16573
	I0524 18:39:51.442505    8808 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0524 18:39:51.448743    8808 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0524 18:39:51.449855    8808 config.go:182] Loaded profile config "download-only-597800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0524 18:39:51.450210    8808 start.go:778] api.Load failed for download-only-597800: filestore "download-only-597800": Docker machine "download-only-597800" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0524 18:39:51.450506    8808 driver.go:375] Setting default libvirt URI to qemu:///system
	W0524 18:39:51.450642    8808 start.go:778] api.Load failed for download-only-597800: filestore "download-only-597800": Docker machine "download-only-597800" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0524 18:39:53.345499    8808 out.go:97] Using the hyperv driver based on existing profile
	I0524 18:39:53.346207    8808 start.go:295] selected driver: hyperv
	I0524 18:39:53.346245    8808 start.go:870] validating driver "hyperv" against &{Name:download-only-597800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-597800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 18:39:53.403001    8808 cni.go:84] Creating CNI manager for ""
	I0524 18:39:53.403082    8808 cni.go:157] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0524 18:39:53.403082    8808 start_flags.go:319] config:
	{Name:download-only-597800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-597800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 18:39:53.403522    8808 iso.go:125] acquiring lock: {Name:mk3b29db369ab0f922ac5eeb788beee87e18ec94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0524 18:39:53.406920    8808 out.go:97] Starting control plane node download-only-597800 in cluster download-only-597800
	I0524 18:39:53.407002    8808 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 18:39:53.446742    8808 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	I0524 18:39:53.446852    8808 cache.go:57] Caching tarball of preloaded images
	I0524 18:39:53.447295    8808 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0524 18:39:53.450137    8808 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0524 18:39:53.450137    8808 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4 ...
	I0524 18:39:53.502325    8808 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4?checksum=md5:1858f4460df043b5f83c3d1ea676dbc0 -> C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-597800"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.29s)
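Note: the download URL logged above carries a ?checksum=md5:... parameter, and preload.go reports verifying the tarball after download. A minimal Go sketch of that kind of check follows; the function name and the hard-coded path/digest are taken from the log for illustration only and are not minikube's actual code.
	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes the file at path and compares it to the hex digest expected
	// from the download URL's ?checksum=md5:... parameter.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Digest and cache path copied from the logged download URL above.
		err := verifyMD5(
			`C:\Users\jenkins.minikube1\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.2-docker-overlay2-amd64.tar.lz4`,
			"1858f4460df043b5f83c3d1ea676dbc0",
		)
		fmt.Println(err)
	}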

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (1.57s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.565807s)
--- PASS: TestDownloadOnly/DeleteAll (1.57s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (1.71s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-597800
aaa_download_only_test.go:199: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-597800: (1.7105768s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (1.71s)

                                                
                                    
x
+
TestBinaryMirror (3.67s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-796000 --alsologtostderr --binary-mirror http://127.0.0.1:51755 --driver=hyperv
aaa_download_only_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-796000 --alsologtostderr --binary-mirror http://127.0.0.1:51755 --driver=hyperv: (2.7391236s)
helpers_test.go:175: Cleaning up "binary-mirror-796000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-796000
--- PASS: TestBinaryMirror (3.67s)
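Note: the --binary-mirror flag above points minikube at http://127.0.0.1:51755 instead of the default release buckets. As a hedged illustration (not the test's actual helper), such a mirror can be as simple as a static file server rooted at a directory laid out like the upstream bucket; the directory name below is assumed.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a local directory as the mirror root; minikube would then be
		// started with --binary-mirror http://127.0.0.1:51755.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Fatal(http.ListenAndServe("127.0.0.1:51755", nil))
	}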

                                                
                                    
x
+
TestOffline (274.28s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-893100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-893100 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m45.2186792s)
helpers_test.go:175: Cleaning up "offline-docker-893100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-893100
E0524 20:05:14.594263    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 20:05:31.358919    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-893100: (49.0635505s)
--- PASS: TestOffline (274.28s)

                                                
                                    
x
+
TestAddons/Setup (301s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-830700 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-830700 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (5m0.9961333s)
--- PASS: TestAddons/Setup (301.00s)

                                                
                                    
x
+
TestAddons/parallel/Registry (22.08s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 36.1545ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-skjd8" [f8df0442-aa1f-4b6f-8b43-36430ef3c814] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.024235s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6tfr6" [7ee49aac-bf9c-490a-bbf5-7e844b2b9e4e] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0374471s
addons_test.go:316: (dbg) Run:  kubectl --context addons-830700 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-830700 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-830700 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.3764949s)
addons_test.go:335: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-830700 ip
addons_test.go:335: (dbg) Done: out/minikube-windows-amd64.exe -p addons-830700 ip: (1.0121522s)
2023/05/24 18:45:26 [DEBUG] GET http://172.27.140.189:5000
addons_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-830700 addons disable registry --alsologtostderr -v=1
addons_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p addons-830700 addons disable registry --alsologtostderr -v=1: (4.2951119s)
--- PASS: TestAddons/parallel/Registry (22.08s)
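Note: the registry check above waits for pods matching a label selector to become healthy before exercising the service. A rough sketch of that polling pattern, shelling out to kubectl much as the harness does; the helper name is invented, the context, selector, and timeout come from the log, and error handling is simplified.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForRunning polls `kubectl get pods` until every pod matching the selector
	// reports phase Running, or the deadline passes.
	func waitForRunning(context, namespace, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", context,
				"get", "pods", "-n", namespace, "-l", selector,
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if err == nil {
				phases := strings.Fields(string(out))
				healthy := len(phases) > 0
				for _, p := range phases {
					if p != "Running" {
						healthy = false
					}
				}
				if healthy {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("timed out waiting for pods matching %q in %q", selector, namespace)
	}

	func main() {
		fmt.Println(waitForRunning("addons-830700", "kube-system", "actual-registry=true", 6*time.Minute))
	}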

                                                
                                    
x
+
TestAddons/parallel/Ingress (45.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-830700 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-830700 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-830700 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [41c00cd2-115c-49fb-9c3d-427aecee8c59] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [41c00cd2-115c-49fb-9c3d-427aecee8c59] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 21.5411575s
addons_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-830700 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Done: out/minikube-windows-amd64.exe -p addons-830700 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (3.8623763s)
addons_test.go:262: (dbg) Run:  kubectl --context addons-830700 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-830700 ip
addons_test.go:267: (dbg) Done: out/minikube-windows-amd64.exe -p addons-830700 ip: (1.0176272s)
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 172.27.140.189
addons_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-830700 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p addons-830700 addons disable ingress-dns --alsologtostderr -v=1: (4.7240711s)
addons_test.go:287: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-830700 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-windows-amd64.exe -p addons-830700 addons disable ingress --alsologtostderr -v=1: (10.9153426s)
--- PASS: TestAddons/parallel/Ingress (45.27s)
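Note: the ingress check above ssh-es into the node and curls 127.0.0.1 with an explicit Host: nginx.example.com header so the nginx ingress routes the request to the test service. Below is a hedged Go equivalent of that header trick, aimed from the host at the node IP reported by `minikube ip` above rather than run inside the VM.
	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://172.27.140.189/", nil)
		if err != nil {
			panic(err)
		}
		// Setting req.Host overrides the Host header, mirroring
		// curl -H 'Host: nginx.example.com'.
		req.Host = "nginx.example.com"
		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, len(body), "bytes")
	}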

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (14.77s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-cdrzx" [224d344a-3e13-4b70-bf13-63f2d794533a] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0492116s
addons_test.go:817: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-830700
addons_test.go:817: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-830700: (9.711011s)
--- PASS: TestAddons/parallel/InspektorGadget (14.77s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (9.77s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 40.5708ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-nqgcx" [e4ed5950-e65b-4e24-bcbe-a27049baf53d] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0363555s
addons_test.go:391: (dbg) Run:  kubectl --context addons-830700 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-830700 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p addons-830700 addons disable metrics-server --alsologtostderr -v=1: (4.3140536s)
--- PASS: TestAddons/parallel/MetricsServer (9.77s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (19.53s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 30.9394ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-rphjn" [bdabfa14-bc29-4a69-9a14-d2339b3eaba5] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0237863s
addons_test.go:449: (dbg) Run:  kubectl --context addons-830700 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-830700 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (10.7218867s)
addons_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-830700 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:466: (dbg) Done: out/minikube-windows-amd64.exe -p addons-830700 addons disable helm-tiller --alsologtostderr -v=1: (3.7371398s)
--- PASS: TestAddons/parallel/HelmTiller (19.53s)

                                                
                                    
x
+
TestAddons/parallel/CSI (57.45s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 9.0949ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-830700 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830700 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-830700 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ec07d18a-1623-4e38-947a-2171cdd882c6] Pending
helpers_test.go:344: "task-pv-pod" [ec07d18a-1623-4e38-947a-2171cdd882c6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ec07d18a-1623-4e38-947a-2171cdd882c6] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 26.0536756s
addons_test.go:560: (dbg) Run:  kubectl --context addons-830700 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-830700 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-830700 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-830700 delete pod task-pv-pod: (1.7777978s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-830700 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-830700 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-830700 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-830700 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [41b245fe-8f92-42e3-a9e3-7b83e8a1d151] Pending
helpers_test.go:344: "task-pv-pod-restore" [41b245fe-8f92-42e3-a9e3-7b83e8a1d151] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [41b245fe-8f92-42e3-a9e3-7b83e8a1d151] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.0223151s
addons_test.go:602: (dbg) Run:  kubectl --context addons-830700 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-830700 delete pod task-pv-pod-restore: (1.1556971s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-830700 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-830700 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-830700 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-windows-amd64.exe -p addons-830700 addons disable csi-hostpath-driver --alsologtostderr -v=1: (9.6843685s)
addons_test.go:618: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-830700 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:618: (dbg) Done: out/minikube-windows-amd64.exe -p addons-830700 addons disable volumesnapshots --alsologtostderr -v=1: (3.6262425s)
--- PASS: TestAddons/parallel/CSI (57.45s)
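Note: the CSI flow above repeatedly reads .status.phase of the PVCs (and .status.readyToUse of the snapshot) until they reach the expected value. A small sketch of that jsonpath-polling idiom; the context, claim name, and timeout come from the log, while the helper name is invented for illustration.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCPhase polls a PVC's .status.phase via kubectl until it matches want.
	func waitForPVCPhase(context, namespace, name, want string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", context,
				"get", "pvc", name, "-n", namespace,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == want {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s did not reach phase %s within %s", namespace, name, want, timeout)
	}

	func main() {
		fmt.Println(waitForPVCPhase("addons-830700", "default", "hpvc", "Bound", 6*time.Minute))
	}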

                                                
                                    
x
+
TestAddons/parallel/Headlamp (27.96s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-830700 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-830700 --alsologtostderr -v=1: (5.9239471s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-6b5756787-r4w5b" [50d726d9-865f-4677-89fa-125333515909] Pending
helpers_test.go:344: "headlamp-6b5756787-r4w5b" [50d726d9-865f-4677-89fa-125333515909] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-6b5756787-r4w5b" [50d726d9-865f-4677-89fa-125333515909] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 22.0286266s
--- PASS: TestAddons/parallel/Headlamp (27.96s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (8.9s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cf587f8d-g8mm2" [aa51402f-a20a-40f4-b2db-c1fecef44847] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0127052s
addons_test.go:836: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-830700
addons_test.go:836: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-830700: (3.8750085s)
--- PASS: TestAddons/parallel/CloudSpanner (8.90s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.47s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-830700 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-830700 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.47s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (29.39s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-830700
addons_test.go:148: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-830700: (25.4583106s)
addons_test.go:152: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-830700
addons_test.go:152: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-830700: (1.7329895s)
addons_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-830700
addons_test.go:156: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-830700: (1.4112415s)
addons_test.go:161: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-830700
--- PASS: TestAddons/StoppedEnableDisable (29.39s)

                                                
                                    
x
+
TestCertOptions (229.49s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-281000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-281000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (2m52.7106824s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-281000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-281000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (4.2021276s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-281000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-281000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-281000 -- "sudo cat /etc/kubernetes/admin.conf": (4.3146783s)
helpers_test.go:175: Cleaning up "cert-options-281000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-281000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-281000: (48.0270774s)
--- PASS: TestCertOptions (229.49s)
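Note: TestCertOptions starts the cluster with extra --apiserver-ips/--apiserver-names and then inspects /var/lib/minikube/certs/apiserver.crt with openssl. The same assertion can be sketched in Go by parsing the PEM and checking the SANs; the sketch assumes the certificate has first been copied out of the VM to a local file named apiserver.crt, and the expected values are the flag values from the log.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"net"
		"os"
	)

	func main() {
		data, err := os.ReadFile("apiserver.crt") // copied out of the VM beforehand (assumption)
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// The flags requested these SANs; port 8555 is the apiserver port and does
		// not appear in the certificate itself.
		for _, ip := range []string{"127.0.0.1", "192.168.15.15"} {
			found := false
			for _, san := range cert.IPAddresses {
				if san.Equal(net.ParseIP(ip)) {
					found = true
				}
			}
			fmt.Printf("IP SAN %-15s present: %v\n", ip, found)
		}
		for _, name := range []string{"localhost", "www.google.com"} {
			found := false
			for _, san := range cert.DNSNames {
				if san == name {
					found = true
				}
			}
			fmt.Printf("DNS SAN %-15s present: %v\n", name, found)
		}
	}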

                                                
                                    
x
+
TestCertExpiration (655.22s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-496400 --memory=2048 --cert-expiration=3m --driver=hyperv
E0524 20:10:08.986222    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-496400 --memory=2048 --cert-expiration=3m --driver=hyperv: (4m6.0156509s)
E0524 20:13:39.911350    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-496400 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-496400 --memory=2048 --cert-expiration=8760h --driver=hyperv: (3m9.3655579s)
helpers_test.go:175: Cleaning up "cert-expiration-496400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-496400
E0524 20:20:09.000519    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-496400: (39.8352768s)
--- PASS: TestCertExpiration (655.22s)

                                                
                                    
x
+
TestDockerFlags (232.93s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-114900 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:45: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-114900 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (3m12.9440797s)
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-114900 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-114900 ssh "sudo systemctl show docker --property=Environment --no-pager": (4.1062144s)
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-114900 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-114900 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (4.0499615s)
helpers_test.go:175: Cleaning up "docker-flags-114900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-114900
E0524 20:15:08.989917    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-114900: (31.8318524s)
--- PASS: TestDockerFlags (232.93s)
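Note: TestDockerFlags asserts that the --docker-env and --docker-opt values survive into the docker unit: Environment should carry FOO=BAR and BAZ=BAT, and the --docker-opt values presumably surface on the dockerd command line (shown here as --debug and --icc=true, an assumption). A hedged sketch of that substring check over the `systemctl show` output captured above; the helper is invented, and the real test shells out through minikube ssh as logged.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// showDockerProperty runs `systemctl show docker --property=<prop>` inside the VM
	// via minikube ssh and returns the raw output.
	func showDockerProperty(profile, prop string) (string, error) {
		out, err := exec.Command("out/minikube-windows-amd64.exe", "-p", profile, "ssh",
			fmt.Sprintf("sudo systemctl show docker --property=%s --no-pager", prop)).Output()
		return string(out), err
	}

	func main() {
		env, _ := showDockerProperty("docker-flags-114900", "Environment")
		for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
			fmt.Printf("Environment has %s: %v\n", want, strings.Contains(env, want))
		}
		// Assumed translation of --docker-opt=debug / --docker-opt=icc=true.
		start, _ := showDockerProperty("docker-flags-114900", "ExecStart")
		for _, want := range []string{"--debug", "--icc=true"} {
			fmt.Printf("ExecStart has %s: %v\n", want, strings.Contains(start, want))
		}
	}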

                                                
                                    
x
+
TestForceSystemdFlag (208.8s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-052200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-052200 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (2m54.392083s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-052200 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-052200 ssh "docker info --format {{.CgroupDriver}}": (4.3916159s)
helpers_test.go:175: Cleaning up "force-systemd-flag-052200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-052200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-052200: (30.0175674s)
--- PASS: TestForceSystemdFlag (208.80s)
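Note: the force-systemd tests read the cgroup driver straight from `docker info --format {{.CgroupDriver}}` and expect "systemd". A one-liner equivalent, hedged as an illustration and run against whatever docker endpoint the current environment points at.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			panic(err)
		}
		driver := strings.TrimSpace(string(out))
		fmt.Println("cgroup driver:", driver, "force-systemd satisfied:", driver == "systemd")
	}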

                                                
                                    
x
+
TestForceSystemdEnv (211.87s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-718100 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-718100 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (2m35.6336066s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-718100 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-718100 ssh "docker info --format {{.CgroupDriver}}": (4.3402433s)
helpers_test.go:175: Cleaning up "force-systemd-env-718100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-718100
E0524 20:10:31.358242    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-718100: (51.8968252s)
--- PASS: TestForceSystemdEnv (211.87s)

                                                
                                    
x
+
TestErrorSpam/setup (119.74s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-071800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 --driver=hyperv
E0524 18:50:08.989174    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 18:50:09.004494    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 18:50:09.019736    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 18:50:09.050788    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 18:50:09.097486    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 18:50:09.192782    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 18:50:09.366453    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 18:50:09.694265    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 18:50:10.345255    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 18:50:11.626314    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-071800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 --driver=hyperv: (1m59.7403367s)
error_spam_test.go:91: acceptable stderr: "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.27.2."
--- PASS: TestErrorSpam/setup (119.74s)
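Note: TestErrorSpam/setup treats the kubectl version warning above as "acceptable stderr" and would flag anything else. A sketch of that allowlist idea: keep regular expressions for known-noisy lines and report every stderr line that matches none of them. The patterns and sample input below are illustrative, not the test's real list.
	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// unexpectedStderr returns the stderr lines that match none of the acceptable patterns.
	func unexpectedStderr(stderr string, acceptable []*regexp.Regexp) []string {
		var bad []string
		for _, line := range strings.Split(stderr, "\n") {
			line = strings.TrimSpace(line)
			if line == "" {
				continue
			}
			ok := false
			for _, re := range acceptable {
				if re.MatchString(line) {
					ok = true
					break
				}
			}
			if !ok {
				bad = append(bad, line)
			}
		}
		return bad
	}

	func main() {
		acceptable := []*regexp.Regexp{
			regexp.MustCompile(`is version .*, which may have incompatibilities with Kubernetes`),
		}
		stderr := "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.27.2.\nsomething unexpected"
		fmt.Println(unexpectedStderr(stderr, acceptable))
	}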

                                                
                                    
x
+
TestErrorSpam/start (6.08s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 start --dry-run
E0524 18:50:14.195334    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 start --dry-run: (1.9716227s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 start --dry-run: (2.0974767s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 start --dry-run
E0524 18:50:19.323688    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 start --dry-run: (2.0103513s)
--- PASS: TestErrorSpam/start (6.08s)

                                                
                                    
x
+
TestErrorSpam/status (15.32s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 status: (5.1462615s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 status
E0524 18:50:29.569689    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 status: (5.15802s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 status: (5.017532s)
--- PASS: TestErrorSpam/status (15.32s)

                                                
                                    
x
+
TestErrorSpam/pause (10.27s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 pause: (3.6750843s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 pause: (3.2951226s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 pause: (3.2949453s)
--- PASS: TestErrorSpam/pause (10.27s)

                                                
                                    
x
+
TestErrorSpam/unpause (10.36s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 unpause: (3.5611079s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 unpause
E0524 18:50:50.056018    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 unpause: (3.3679124s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 unpause: (3.4328126s)
--- PASS: TestErrorSpam/unpause (10.36s)

                                                
                                    
x
+
TestErrorSpam/stop (30.7s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 stop: (19.1954494s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 stop: (5.6622671s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-071800 --log_dir C:\Users\jenkins.minikube1\AppData\Local\Temp\nospam-071800 stop: (5.8385255s)
--- PASS: TestErrorSpam/stop (30.70s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: C:\Users\jenkins.minikube1\minikube-integration\.minikube\files\etc\test\nested\copy\6560\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (133.73s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-644800 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0524 18:52:52.964365    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
functional_test.go:2229: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-644800 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (2m13.7296793s)
--- PASS: TestFunctional/serial/StartWithProxy (133.73s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (70.59s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-644800 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-644800 --alsologtostderr -v=8: (1m10.5828322s)
functional_test.go:658: soft start took 1m10.5841626s for "functional-644800" cluster.
--- PASS: TestFunctional/serial/SoftStart (70.59s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.19s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.19s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-644800 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (14.76s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 cache add registry.k8s.io/pause:3.1: (4.912835s)
functional_test.go:1044: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 cache add registry.k8s.io/pause:3.3: (4.9010208s)
functional_test.go:1044: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 cache add registry.k8s.io/pause:latest
E0524 18:55:09.001555    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
functional_test.go:1044: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 cache add registry.k8s.io/pause:latest: (4.9484605s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (14.76s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (6.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-644800 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2138372279\001
functional_test.go:1072: (dbg) Done: docker build -t minikube-local-cache-test:functional-644800 C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2138372279\001: (1.5148924s)
functional_test.go:1084: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 cache add minikube-local-cache-test:functional-644800
functional_test.go:1084: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 cache add minikube-local-cache-test:functional-644800: (4.0892575s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 cache delete minikube-local-cache-test:functional-644800
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-644800
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (6.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.25s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.85s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh sudo crictl images
functional_test.go:1119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 ssh sudo crictl images: (3.8470395s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.85s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (16.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1142: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 ssh sudo docker rmi registry.k8s.io/pause:latest: (3.8396251s)
functional_test.go:1148: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-644800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (3.7795456s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 cache reload: (4.5001838s)
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0524 18:55:36.811773    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (3.9214595s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (16.04s)
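
Note: the cache_reload sequence above can be replayed by hand against the same profile. A minimal sketch, assuming the functional-644800 VM from this run is still up (every command below is one the test itself invokes):

	# drop the image from the VM's container runtime
	out/minikube-windows-amd64.exe -p functional-644800 ssh sudo docker rmi registry.k8s.io/pause:latest
	# verify it is gone -- crictl inspecti is expected to fail with "no such image"
	out/minikube-windows-amd64.exe -p functional-644800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	# restore it from minikube's on-host cache
	out/minikube-windows-amd64.exe -p functional-644800 cache reload
	# the same inspect now succeeds
	out/minikube-windows-amd64.exe -p functional-644800 ssh sudo crictl inspecti registry.k8s.io/pause:latest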

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.5s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.50s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.52s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 kubectl -- --context functional-644800 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.42s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out\kubectl.exe --context functional-644800 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.42s)

                                                
                                    
TestFunctional/serial/ExtraConfig (78.27s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-644800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-644800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m18.26477s)
functional_test.go:756: restart took 1m18.2650668s for "functional-644800" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (78.27s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.24s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-644800 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.24s)

                                                
                                    
TestFunctional/serial/LogsCmd (4.4s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 logs
functional_test.go:1231: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 logs: (4.3996161s)
--- PASS: TestFunctional/serial/LogsCmd (4.40s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (5.26s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1744728835\001\logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 logs --file C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1744728835\001\logs.txt: (5.2586849s)
--- PASS: TestFunctional/serial/LogsFileCmd (5.26s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.63s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-644800 config get cpus: exit status 14 (235.7831ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-644800 config get cpus: exit status 14 (249.4424ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.63s)
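
Note: the ConfigCmd assertions above amount to a set/get/unset round trip; exit status 14 is what config get returns when the key is absent. A minimal sketch using the same profile (commands taken from the run above):

	out/minikube-windows-amd64.exe -p functional-644800 config get cpus     # exit 14: key not set
	out/minikube-windows-amd64.exe -p functional-644800 config set cpus 2
	out/minikube-windows-amd64.exe -p functional-644800 config get cpus     # expected to print 2
	out/minikube-windows-amd64.exe -p functional-644800 config unset cpus
	out/minikube-windows-amd64.exe -p functional-644800 config get cpus     # exit 14 again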

                                                
                                    
TestFunctional/parallel/DryRun (4.43s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-644800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:969: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-644800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 23 (2.2686575s)

                                                
                                                
-- stdout --
	* [functional-644800] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 18:58:33.752190    2944 out.go:296] Setting OutFile to fd 716 ...
	I0524 18:58:33.832821    2944 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:58:33.832821    2944 out.go:309] Setting ErrFile to fd 696...
	I0524 18:58:33.832821    2944 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:58:33.852286    2944 out.go:303] Setting JSON to false
	I0524 18:58:33.855800    2944 start.go:125] hostinfo: {"hostname":"minikube1","uptime":3226,"bootTime":1684951486,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2965 Build 19045.2965","kernelVersion":"10.0.19045.2965 Build 19045.2965","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0524 18:58:33.855800    2944 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 18:58:33.863157    2944 out.go:177] * [functional-644800] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	I0524 18:58:33.866128    2944 notify.go:220] Checking for updates...
	I0524 18:58:33.869030    2944 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 18:58:33.871030    2944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 18:58:33.873487    2944 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0524 18:58:33.876701    2944 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 18:58:33.879216    2944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 18:58:33.881840    2944 config.go:182] Loaded profile config "functional-644800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 18:58:33.883235    2944 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 18:58:35.782221    2944 out.go:177] * Using the hyperv driver based on existing profile
	I0524 18:58:35.784019    2944 start.go:295] selected driver: hyperv
	I0524 18:58:35.784019    2944 start.go:870] validating driver "hyperv" against &{Name:functional-644800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.27.2 ClusterName:functional-644800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.27.143.207 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 18:58:35.784019    2944 start.go:881] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 18:58:35.842537    2944 out.go:177] 
	W0524 18:58:35.845149    2944 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0524 18:58:35.848346    2944 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-644800 --dry-run --alsologtostderr -v=1 --driver=hyperv
functional_test.go:986: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-644800 --dry-run --alsologtostderr -v=1 --driver=hyperv: (2.1599608s)
--- PASS: TestFunctional/parallel/DryRun (4.43s)
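
Note: the non-zero exit above is the expected RSRC_INSUFFICIENT_REQ_MEMORY validation; minikube rejects --memory 250MB because the usable minimum is 1800MB. A minimal sketch against the existing profile, with flags taken from the commands above:

	out/minikube-windows-amd64.exe start -p functional-644800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
	# exit 23: requested 250MiB is below the 1800MB minimum
	out/minikube-windows-amd64.exe start -p functional-644800 --dry-run --alsologtostderr -v=1 --driver=hyperv
	# without the memory override the dry run validates the existing profile and exits 0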

                                                
                                    
TestFunctional/parallel/InternationalLanguage (2.21s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-644800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-644800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 23 (2.2075758s)

                                                
                                                
-- stdout --
	* [functional-644800] minikube v1.30.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote hyperv basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 18:58:30.386139   12112 out.go:296] Setting OutFile to fd 932 ...
	I0524 18:58:30.458007   12112 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:58:30.458007   12112 out.go:309] Setting ErrFile to fd 640...
	I0524 18:58:30.458007   12112 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 18:58:30.480283   12112 out.go:303] Setting JSON to false
	I0524 18:58:30.483216   12112 start.go:125] hostinfo: {"hostname":"minikube1","uptime":3223,"bootTime":1684951486,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.2965 Build 19045.2965","kernelVersion":"10.0.19045.2965 Build 19045.2965","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f3192dc9-6fb2-4797-bdaa-5f567903ef41"}
	W0524 18:58:30.484216   12112 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0524 18:58:30.494383   12112 out.go:177] * [functional-644800] minikube v1.30.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	I0524 18:58:30.497363   12112 notify.go:220] Checking for updates...
	I0524 18:58:30.499531   12112 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	I0524 18:58:30.501374   12112 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0524 18:58:30.504568   12112 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	I0524 18:58:30.506848   12112 out.go:177]   - MINIKUBE_LOCATION=16573
	I0524 18:58:30.509486   12112 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0524 18:58:30.513181   12112 config.go:182] Loaded profile config "functional-644800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 18:58:30.514162   12112 driver.go:375] Setting default libvirt URI to qemu:///system
	I0524 18:58:32.349968   12112 out.go:177] * Utilisation du pilote hyperv basé sur le profil existant
	I0524 18:58:32.352206   12112 start.go:295] selected driver: hyperv
	I0524 18:58:32.352206   12112 start.go:870] validating driver "hyperv" against &{Name:functional-644800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16501/minikube-v1.30.1-1684536668-16501-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1684536746-16501@sha256:f5d93abf1d1cfb142a7cf0b58b24029595d621e5f943105b16c61199094d77de Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.27.2 ClusterName:functional-644800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.27.143.207 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube1:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0524 18:58:32.353427   12112 start.go:881] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0524 18:58:32.412909   12112 out.go:177] 
	W0524 18:58:32.416488   12112 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0524 18:58:32.420021   12112 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (2.21s)

                                                
                                    
TestFunctional/parallel/StatusCmd (16.58s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 status
functional_test.go:849: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 status: (5.5295471s)
functional_test.go:855: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:855: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (5.6382244s)
functional_test.go:867: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 status -o json
functional_test.go:867: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 status -o json: (5.4130776s)
--- PASS: TestFunctional/parallel/StatusCmd (16.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (28.39s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-644800 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-644800 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-lsvl5" [bae8d5c5-1dff-4d45-9b50-df0f4034a375] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-lsvl5" [bae8d5c5-1dff-4d45-9b50-df0f4034a375] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 20.0291264s
functional_test.go:1647: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 service hello-node-connect --url
functional_test.go:1647: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 service hello-node-connect --url: (7.6472247s)
functional_test.go:1653: found endpoint for hello-node-connect: http://172.27.143.207:31994
functional_test.go:1673: http://172.27.143.207:31994: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-6fb669fc84-lsvl5

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.27.143.207:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=172.27.143.207:31994
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (28.39s)
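
Note: ServiceCmdConnect exercises the usual deploy / expose / service --url path. A minimal sketch with the same image and profile (the NodePort, 31994 in this run, is assigned per run):

	kubectl --context functional-644800 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-644800 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-windows-amd64.exe -p functional-644800 service hello-node-connect --url
	# prints the reachable endpoint, e.g. http://172.27.143.207:31994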

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.76s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.76s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (41.65s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [cc0adf5a-52d5-49d7-a558-fe8fdb7600d2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0345282s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-644800 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-644800 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-644800 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-644800 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d5c7e513-eda8-494e-b4f8-b33c25eeec57] Pending
helpers_test.go:344: "sp-pod" [d5c7e513-eda8-494e-b4f8-b33c25eeec57] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d5c7e513-eda8-494e-b4f8-b33c25eeec57] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 23.0659757s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-644800 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-644800 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-644800 delete -f testdata/storage-provisioner/pod.yaml: (1.7517956s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-644800 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f0fd4734-b8be-41c4-8332-b82294568673] Pending
helpers_test.go:344: "sp-pod" [f0fd4734-b8be-41c4-8332-b82294568673] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f0fd4734-b8be-41c4-8332-b82294568673] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0237337s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-644800 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.65s)
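
Note: the PersistentVolumeClaim test checks that data written to the PVC-backed mount survives deleting and re-creating the pod. A minimal sketch using the test's own manifests (pvc.yaml and pod.yaml live under the integration suite's testdata/storage-provisioner directory, as referenced above):

	kubectl --context functional-644800 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-644800 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-644800 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-644800 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-644800 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-644800 exec sp-pod -- ls /tmp/mount
	# foo is still present after the second pod starts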

                                                
                                    
TestFunctional/parallel/SSHCmd (8.77s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh "echo hello"
functional_test.go:1723: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 ssh "echo hello": (4.5089061s)
functional_test.go:1740: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh "cat /etc/hostname"
functional_test.go:1740: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 ssh "cat /etc/hostname": (4.2577238s)
--- PASS: TestFunctional/parallel/SSHCmd (8.77s)

                                                
                                    
TestFunctional/parallel/CpCmd (17.61s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 cp testdata\cp-test.txt /home/docker/cp-test.txt: (3.9836297s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh -n functional-644800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 ssh -n functional-644800 "sudo cat /home/docker/cp-test.txt": (4.2469044s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 cp functional-644800:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd674291455\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 cp functional-644800:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestFunctionalparallelCpCmd674291455\001\cp-test.txt: (4.4535954s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh -n functional-644800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 ssh -n functional-644800 "sudo cat /home/docker/cp-test.txt": (4.9164342s)
--- PASS: TestFunctional/parallel/CpCmd (17.61s)

                                                
                                    
TestFunctional/parallel/MySQL (56.83s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-644800 replace --force -f testdata\mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-xjz6q" [e21aa901-02c9-4d72-a51d-e7310f234dfe] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-xjz6q" [e21aa901-02c9-4d72-a51d-e7310f234dfe] Running
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 42.0406784s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-644800 exec mysql-7db894d786-xjz6q -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-644800 exec mysql-7db894d786-xjz6q -- mysql -ppassword -e "show databases;": exit status 1 (753.7668ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-644800 exec mysql-7db894d786-xjz6q -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-644800 exec mysql-7db894d786-xjz6q -- mysql -ppassword -e "show databases;": exit status 1 (492.5282ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-644800 exec mysql-7db894d786-xjz6q -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-644800 exec mysql-7db894d786-xjz6q -- mysql -ppassword -e "show databases;": exit status 1 (455.1507ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-644800 exec mysql-7db894d786-xjz6q -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-644800 exec mysql-7db894d786-xjz6q -- mysql -ppassword -e "show databases;": exit status 1 (599.8806ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-644800 exec mysql-7db894d786-xjz6q -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-644800 exec mysql-7db894d786-xjz6q -- mysql -ppassword -e "show databases;": exit status 1 (837.8354ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-644800 exec mysql-7db894d786-xjz6q -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (56.83s)

                                                
                                    
TestFunctional/parallel/FileSync (4.37s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/6560/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo cat /etc/test/nested/copy/6560/hosts"
functional_test.go:1926: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo cat /etc/test/nested/copy/6560/hosts": (4.3736614s)
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (4.37s)

                                                
                                    
TestFunctional/parallel/CertSync (26.21s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/6560.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo cat /etc/ssl/certs/6560.pem"
functional_test.go:1968: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo cat /etc/ssl/certs/6560.pem": (4.4176796s)
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/6560.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo cat /usr/share/ca-certificates/6560.pem"
functional_test.go:1968: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo cat /usr/share/ca-certificates/6560.pem": (4.1968642s)
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1968: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo cat /etc/ssl/certs/51391683.0": (4.6573908s)
functional_test.go:1994: Checking for existence of /etc/ssl/certs/65602.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo cat /etc/ssl/certs/65602.pem"
functional_test.go:1995: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo cat /etc/ssl/certs/65602.pem": (4.0913168s)
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/65602.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo cat /usr/share/ca-certificates/65602.pem"
functional_test.go:1995: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo cat /usr/share/ca-certificates/65602.pem": (4.5768783s)
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1995: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (4.2700151s)
--- PASS: TestFunctional/parallel/CertSync (26.21s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.29s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-644800 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.29s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (4.09s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-644800 ssh "sudo systemctl is-active crio": exit status 1 (4.0902957s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (4.09s)

                                                
                                    
TestFunctional/parallel/License (3.04s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2283: (dbg) Done: out/minikube-windows-amd64.exe license: (3.0201432s)
--- PASS: TestFunctional/parallel/License (3.04s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (19.04s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:494: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-644800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-644800"
functional_test.go:494: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-644800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-644800": (12.5081876s)
functional_test.go:517: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-644800 docker-env | Invoke-Expression ; docker images"
functional_test.go:517: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-644800 docker-env | Invoke-Expression ; docker images": (6.5190577s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (19.04s)
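
Note: DockerEnv/powershell points the host's docker CLI at the Docker daemon inside the minikube VM. A minimal PowerShell sketch, equivalent to the piped invocation above:

	out/minikube-windows-amd64.exe -p functional-644800 docker-env | Invoke-Expression
	docker images
	# the image list now comes from the VM's Docker daemon rather than the host's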

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (1.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 update-context --alsologtostderr -v=2
functional_test.go:2114: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 update-context --alsologtostderr -v=2: (1.1285759s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (1.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (1.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 update-context --alsologtostderr -v=2
functional_test.go:2114: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 update-context --alsologtostderr -v=2: (1.1441323s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (1.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (1.07s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 update-context --alsologtostderr -v=2
functional_test.go:2114: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 update-context --alsologtostderr -v=2: (1.065433s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (1.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (3.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image ls --format short --alsologtostderr
functional_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image ls --format short --alsologtostderr: (3.3170563s)
functional_test.go:264: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-644800 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-644800
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-644800
functional_test.go:267: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-644800 image ls --format short --alsologtostderr:
I0524 18:58:51.333674    6944 out.go:296] Setting OutFile to fd 860 ...
I0524 18:58:51.422211    6944 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:58:51.422211    6944 out.go:309] Setting ErrFile to fd 656...
I0524 18:58:51.422211    6944 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:58:51.437004    6944 config.go:182] Loaded profile config "functional-644800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 18:58:51.437004    6944 config.go:182] Loaded profile config "functional-644800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 18:58:51.438242    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-644800 ).state
I0524 18:58:52.276518    6944 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0524 18:58:52.276579    6944 main.go:141] libmachine: [stderr =====>] : 
I0524 18:58:52.288279    6944 ssh_runner.go:195] Run: systemctl --version
I0524 18:58:52.288279    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-644800 ).state
I0524 18:58:53.074229    6944 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0524 18:58:53.074296    6944 main.go:141] libmachine: [stderr =====>] : 
I0524 18:58:53.074296    6944 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-644800 ).networkadapters[0]).ipaddresses[0]
I0524 18:58:54.262904    6944 main.go:141] libmachine: [stdout =====>] : 172.27.143.207

                                                
                                                
I0524 18:58:54.262904    6944 main.go:141] libmachine: [stderr =====>] : 
I0524 18:58:54.263469    6944 sshutil.go:53] new ssh client: &{IP:172.27.143.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-644800\id_rsa Username:docker}
I0524 18:58:54.384164    6944 ssh_runner.go:235] Completed: systemctl --version: (2.0958852s)
I0524 18:58:54.396464    6944 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (3.32s)

TestFunctional/parallel/ImageCommands/ImageListTable (3.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image ls --format table --alsologtostderr
functional_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image ls --format table --alsologtostderr: (3.2282028s)
functional_test.go:264: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-644800 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | latest            | a7be6198544f0 | 142MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-644800 | 26164dceb6d09 | 30B    |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-apiserver              | v1.27.2           | c5b13e4f7806d | 121MB  |
| registry.k8s.io/kube-scheduler              | v1.27.2           | 89e70da428d29 | 58.4MB |
| docker.io/library/mysql                     | 5.7               | dd6675b5cfea1 | 569MB  |
| docker.io/library/nginx                     | alpine            | 8e75cbc5b25c8 | 41MB   |
| registry.k8s.io/etcd                        | 3.5.7-0           | 86b6af7dd652c | 296MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/kube-controller-manager     | v1.27.2           | ac2b7465ebba9 | 112MB  |
| registry.k8s.io/kube-proxy                  | v1.27.2           | b8aa50768fd67 | 71.1MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-644800 | ffd4cfbbe753e | 32.9MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:267: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-644800 image ls --format table --alsologtostderr:
I0524 18:58:52.342637   10608 out.go:296] Setting OutFile to fd 860 ...
I0524 18:58:52.417036   10608 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:58:52.417036   10608 out.go:309] Setting ErrFile to fd 656...
I0524 18:58:52.417036   10608 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:58:52.436171   10608 config.go:182] Loaded profile config "functional-644800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 18:58:52.436789   10608 config.go:182] Loaded profile config "functional-644800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 18:58:52.437404   10608 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-644800 ).state
I0524 18:58:53.264512   10608 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0524 18:58:53.264590   10608 main.go:141] libmachine: [stderr =====>] : 
I0524 18:58:53.278454   10608 ssh_runner.go:195] Run: systemctl --version
I0524 18:58:53.278454   10608 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-644800 ).state
I0524 18:58:54.058112   10608 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0524 18:58:54.058112   10608 main.go:141] libmachine: [stderr =====>] : 
I0524 18:58:54.058112   10608 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-644800 ).networkadapters[0]).ipaddresses[0]
I0524 18:58:55.240576   10608 main.go:141] libmachine: [stdout =====>] : 172.27.143.207

                                                
                                                
I0524 18:58:55.240576   10608 main.go:141] libmachine: [stderr =====>] : 
I0524 18:58:55.240861   10608 sshutil.go:53] new ssh client: &{IP:172.27.143.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-644800\id_rsa Username:docker}
I0524 18:58:55.348276   10608 ssh_runner.go:235] Completed: systemctl --version: (2.0697772s)
I0524 18:58:55.357585   10608 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (3.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (3.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image ls --format json --alsologtostderr
functional_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image ls --format json --alsologtostderr: (3.1776525s)
functional_test.go:264: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-644800 image ls --format json --alsologtostderr:
[{"id":"26164dceb6d09d06bd5974a39bfd2c9b626db5c06a99bb9b057a24e7c67f9425","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-644800"],"size":"30"},{"id":"ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.2"],"size":"112000000"},{"id":"89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.2"],"size":"58400000"},{"id":"b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.2"],"size":"71100000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"296000000"},{"id":"c5b13e4f7806de1
dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.2"],"size":"121000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-644800"],"size":"32900000"},{"id":"8e75cbc5b25c8438fcfe2e7c12c98409d5f161cbb668d6c444e02796691ada70","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"a7be6198544f09a75b26e6376459b47c5b9972e7aa742af9f356b540fe852cd4","repoDiges
ts":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"dd6675b5cfea17abb655ea8229cbcfa5db9d0b041f839db0c24228c2e18a4bdf","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"569000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:267: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-644800 image ls --format json --alsologtostderr:
I0524 18:58:49.178609   12056 out.go:296] Setting OutFile to fd 988 ...
I0524 18:58:49.249601   12056 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:58:49.249601   12056 out.go:309] Setting ErrFile to fd 796...
I0524 18:58:49.249878   12056 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:58:49.269644   12056 config.go:182] Loaded profile config "functional-644800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 18:58:49.269644   12056 config.go:182] Loaded profile config "functional-644800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 18:58:49.270897   12056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-644800 ).state
I0524 18:58:50.063018   12056 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0524 18:58:50.063018   12056 main.go:141] libmachine: [stderr =====>] : 
I0524 18:58:50.075663   12056 ssh_runner.go:195] Run: systemctl --version
I0524 18:58:50.075663   12056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-644800 ).state
I0524 18:58:50.864866   12056 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0524 18:58:50.864978   12056 main.go:141] libmachine: [stderr =====>] : 
I0524 18:58:50.864978   12056 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-644800 ).networkadapters[0]).ipaddresses[0]
I0524 18:58:52.014492   12056 main.go:141] libmachine: [stdout =====>] : 172.27.143.207

                                                
                                                
I0524 18:58:52.014710   12056 main.go:141] libmachine: [stderr =====>] : 
I0524 18:58:52.015223   12056 sshutil.go:53] new ssh client: &{IP:172.27.143.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-644800\id_rsa Username:docker}
I0524 18:58:52.119591   12056 ssh_runner.go:235] Completed: systemctl --version: (2.0438554s)
I0524 18:58:52.127993   12056 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (3.18s)

TestFunctional/parallel/ImageCommands/ImageListYaml (3.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image ls --format yaml --alsologtostderr
functional_test.go:259: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image ls --format yaml --alsologtostderr: (3.1961893s)
functional_test.go:264: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-644800 image ls --format yaml --alsologtostderr:
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 26164dceb6d09d06bd5974a39bfd2c9b626db5c06a99bb9b057a24e7c67f9425
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-644800
size: "30"
- id: a7be6198544f09a75b26e6376459b47c5b9972e7aa742af9f356b540fe852cd4
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-644800
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: b8aa50768fd675409bd7edcc4f6a18290dad5d9c2515aad12d32174dc13e7dee
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.2
size: "71100000"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "296000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: c5b13e4f7806de1dcc1c1146c7ec7c89d77ac340c3695118cf84bb0b5f989370
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.2
size: "121000000"
- id: dd6675b5cfea17abb655ea8229cbcfa5db9d0b041f839db0c24228c2e18a4bdf
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "569000000"
- id: 8e75cbc5b25c8438fcfe2e7c12c98409d5f161cbb668d6c444e02796691ada70
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "41000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ac2b7465ebba99362b6ea11fca1357b90ae6854b4464a25c55e6eef622103e12
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.2
size: "112000000"
- id: 89e70da428d29a45b89f5daa196229ceddea947f4708b3a61669e0069cb6b8b0
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.2
size: "58400000"

                                                
                                                
functional_test.go:267: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-644800 image ls --format yaml --alsologtostderr:
I0524 18:58:55.580474    7480 out.go:296] Setting OutFile to fd 984 ...
I0524 18:58:55.655774    7480 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:58:55.655774    7480 out.go:309] Setting ErrFile to fd 920...
I0524 18:58:55.655774    7480 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:58:55.670061    7480 config.go:182] Loaded profile config "functional-644800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 18:58:55.670061    7480 config.go:182] Loaded profile config "functional-644800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 18:58:55.671689    7480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-644800 ).state
I0524 18:58:56.478508    7480 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0524 18:58:56.478508    7480 main.go:141] libmachine: [stderr =====>] : 
I0524 18:58:56.491390    7480 ssh_runner.go:195] Run: systemctl --version
I0524 18:58:56.491390    7480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-644800 ).state
I0524 18:58:57.283228    7480 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0524 18:58:57.283228    7480 main.go:141] libmachine: [stderr =====>] : 
I0524 18:58:57.283483    7480 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-644800 ).networkadapters[0]).ipaddresses[0]
I0524 18:58:58.454102    7480 main.go:141] libmachine: [stdout =====>] : 172.27.143.207

                                                
                                                
I0524 18:58:58.454102    7480 main.go:141] libmachine: [stderr =====>] : 
I0524 18:58:58.454488    7480 sshutil.go:53] new ssh client: &{IP:172.27.143.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-644800\id_rsa Username:docker}
I0524 18:58:58.556933    7480 ssh_runner.go:235] Completed: systemctl --version: (2.0655427s)
I0524 18:58:58.566297    7480 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (3.20s)

TestFunctional/parallel/ImageCommands/ImageBuild (14.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-644800 ssh pgrep buildkitd: exit status 1 (3.9869626s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image build -t localhost/my-image:functional-644800 testdata\build --alsologtostderr
functional_test.go:313: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image build -t localhost/my-image:functional-644800 testdata\build --alsologtostderr: (7.1359346s)
functional_test.go:318: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-644800 image build -t localhost/my-image:functional-644800 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in f4f17bb1beb9
Removing intermediate container f4f17bb1beb9
---> 2b001579d4e0
Step 3/3 : ADD content.txt /
---> 8ffa4471d460
Successfully built 8ffa4471d460
Successfully tagged localhost/my-image:functional-644800
functional_test.go:321: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-644800 image build -t localhost/my-image:functional-644800 testdata\build --alsologtostderr:
I0524 18:58:58.632109    5184 out.go:296] Setting OutFile to fd 920 ...
I0524 18:58:58.711058    5184 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:58:58.711058    5184 out.go:309] Setting ErrFile to fd 1008...
I0524 18:58:58.711167    5184 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0524 18:58:58.724749    5184 config.go:182] Loaded profile config "functional-644800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 18:58:58.741232    5184 config.go:182] Loaded profile config "functional-644800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0524 18:58:58.742055    5184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-644800 ).state
I0524 18:58:59.542297    5184 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0524 18:58:59.542559    5184 main.go:141] libmachine: [stderr =====>] : 
I0524 18:58:59.554068    5184 ssh_runner.go:195] Run: systemctl --version
I0524 18:58:59.554068    5184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-644800 ).state
I0524 18:59:00.341554    5184 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0524 18:59:00.341554    5184 main.go:141] libmachine: [stderr =====>] : 
I0524 18:59:00.341801    5184 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-644800 ).networkadapters[0]).ipaddresses[0]
I0524 18:59:01.463572    5184 main.go:141] libmachine: [stdout =====>] : 172.27.143.207

                                                
                                                
I0524 18:59:01.463572    5184 main.go:141] libmachine: [stderr =====>] : 
I0524 18:59:01.463572    5184 sshutil.go:53] new ssh client: &{IP:172.27.143.207 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\functional-644800\id_rsa Username:docker}
I0524 18:59:01.566427    5184 ssh_runner.go:235] Completed: systemctl --version: (2.0123594s)
I0524 18:59:01.567044    5184 build_images.go:151] Building image from path: C:\Users\jenkins.minikube1\AppData\Local\Temp\build.1832732406.tar
I0524 18:59:01.577365    5184 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0524 18:59:01.608099    5184 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1832732406.tar
I0524 18:59:01.616514    5184 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1832732406.tar: stat -c "%s %y" /var/lib/minikube/build/build.1832732406.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1832732406.tar': No such file or directory
I0524 18:59:01.616514    5184 ssh_runner.go:362] scp C:\Users\jenkins.minikube1\AppData\Local\Temp\build.1832732406.tar --> /var/lib/minikube/build/build.1832732406.tar (3072 bytes)
I0524 18:59:01.674821    5184 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1832732406
I0524 18:59:01.701508    5184 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1832732406 -xf /var/lib/minikube/build/build.1832732406.tar
I0524 18:59:01.718613    5184 docker.go:336] Building image: /var/lib/minikube/build/build.1832732406
I0524 18:59:01.724842    5184 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-644800 /var/lib/minikube/build/build.1832732406
I0524 18:59:05.538905    5184 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-644800 /var/lib/minikube/build/build.1832732406: (3.8139732s)
I0524 18:59:05.550281    5184 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1832732406
I0524 18:59:05.579557    5184 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1832732406.tar
I0524 18:59:05.599038    5184 build_images.go:207] Built localhost/my-image:functional-644800 from C:\Users\jenkins.minikube1\AppData\Local\Temp\build.1832732406.tar
I0524 18:59:05.599202    5184 build_images.go:123] succeeded building to: functional-644800
I0524 18:59:05.599202    5184 build_images.go:124] failed building to: 
functional_test.go:446: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image ls
functional_test.go:446: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image ls: (3.1419233s)
E0524 19:00:08.990520    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (14.27s)
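For reference, the three build steps in the stdout above imply a Dockerfile along these lines. This is a minimal sketch reconstructed from the Step 1/3 to 3/3 output; the actual contents of the testdata\build context are not reproduced in this report and may differ:

# Sketch of the Dockerfile implied by the build output above (reconstructed, not copied from testdata\build)
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /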

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.92s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.6420349s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-644800
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.92s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (16.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image load --daemon gcr.io/google-containers/addon-resizer:functional-644800 --alsologtostderr
functional_test.go:353: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image load --daemon gcr.io/google-containers/addon-resizer:functional-644800 --alsologtostderr: (13.2753491s)
functional_test.go:446: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image ls
functional_test.go:446: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image ls: (3.5289709s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (16.80s)

TestFunctional/parallel/Version/short (0.27s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 version --short
--- PASS: TestFunctional/parallel/Version/short (0.27s)

TestFunctional/parallel/Version/components (3.81s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 version -o=json --components
functional_test.go:2265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 version -o=json --components: (3.8137044s)
--- PASS: TestFunctional/parallel/Version/components (3.81s)

TestFunctional/parallel/ProfileCmd/profile_not_create (4.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1273: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.5687198s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (4.09s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (4.29s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-644800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-644800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-644800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 11384: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 8352: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-644800 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (4.29s)

TestFunctional/parallel/ProfileCmd/profile_list (4.07s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1308: (dbg) Done: out/minikube-windows-amd64.exe profile list: (3.8172557s)
functional_test.go:1313: Took "3.8174335s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1327: Took "250.02ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (4.07s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-644800 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.58s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-644800 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b9d9e41c-86d3-48cd-9990-7e3243e1a8c1] Pending
helpers_test.go:344: "nginx-svc" [b9d9e41c-86d3-48cd-9990-7e3243e1a8c1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b9d9e41c-86d3-48cd-9990-7e3243e1a8c1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 20.0325807s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.58s)

TestFunctional/parallel/ProfileCmd/profile_json_output (3.9s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1359: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (3.6360457s)
functional_test.go:1364: Took "3.6361513s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1377: Took "262.6947ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (3.90s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (10.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image load --daemon gcr.io/google-containers/addon-resizer:functional-644800 --alsologtostderr
functional_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image load --daemon gcr.io/google-containers/addon-resizer:functional-644800 --alsologtostderr: (7.3384154s)
functional_test.go:446: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image ls
functional_test.go:446: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image ls: (3.1595755s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (10.50s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (16.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.4561684s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-644800
functional_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image load --daemon gcr.io/google-containers/addon-resizer:functional-644800 --alsologtostderr
functional_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image load --daemon gcr.io/google-containers/addon-resizer:functional-644800 --alsologtostderr: (10.7882278s)
functional_test.go:446: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image ls
functional_test.go:446: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image ls: (3.2388386s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (16.76s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-644800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 912: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (12.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-644800 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-644800 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-ql9x9" [6f914a96-ffe4-4122-b0da-31ed6012f913] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-ql9x9" [6f914a96-ffe4-4122-b0da-31ed6012f913] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.0369239s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.60s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image save gcr.io/google-containers/addon-resizer:functional-644800 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:378: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image save gcr.io/google-containers/addon-resizer:functional-644800 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (5.2295857s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.23s)

TestFunctional/parallel/ImageCommands/ImageRemove (6.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image rm gcr.io/google-containers/addon-resizer:functional-644800 --alsologtostderr
functional_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image rm gcr.io/google-containers/addon-resizer:functional-644800 --alsologtostderr: (3.2980728s)
functional_test.go:446: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image ls
functional_test.go:446: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image ls: (3.1280036s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (6.43s)

TestFunctional/parallel/ServiceCmd/List (5.91s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 service list
functional_test.go:1457: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 service list: (5.9055572s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (5.91s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (8.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (5.4062639s)
functional_test.go:446: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image ls
functional_test.go:446: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image ls: (3.3408899s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (8.75s)

TestFunctional/parallel/ServiceCmd/JSONOutput (6.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 service list -o json
functional_test.go:1487: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 service list -o json: (6.1072214s)
functional_test.go:1492: Took "6.1075133s" to run "out/minikube-windows-amd64.exe -p functional-644800 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (6.11s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (8.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-644800
functional_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 image save --daemon gcr.io/google-containers/addon-resizer:functional-644800 --alsologtostderr
functional_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 image save --daemon gcr.io/google-containers/addon-resizer:functional-644800 --alsologtostderr: (8.1760604s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-644800
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (8.64s)

TestFunctional/parallel/ServiceCmd/HTTPS (7.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 service --namespace=default --https --url hello-node
functional_test.go:1507: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 service --namespace=default --https --url hello-node: (7.7094675s)
functional_test.go:1520: found endpoint: https://172.27.143.207:31681
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (7.71s)

TestFunctional/parallel/ServiceCmd/Format (7.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 service hello-node --url --format={{.IP}}
functional_test.go:1538: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 service hello-node --url --format={{.IP}}: (7.5286875s)
--- PASS: TestFunctional/parallel/ServiceCmd/Format (7.53s)

TestFunctional/parallel/ServiceCmd/URL (7.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-644800 service hello-node --url
functional_test.go:1557: (dbg) Done: out/minikube-windows-amd64.exe -p functional-644800 service hello-node --url: (7.3937392s)
functional_test.go:1563: found endpoint for hello-node: http://172.27.143.207:31681
--- PASS: TestFunctional/parallel/ServiceCmd/URL (7.39s)

TestFunctional/delete_addon-resizer_images (0.65s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-644800
--- PASS: TestFunctional/delete_addon-resizer_images (0.65s)

TestFunctional/delete_my-image_image (0.19s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-644800
--- PASS: TestFunctional/delete_my-image_image (0.19s)

TestFunctional/delete_minikube_cached_images (0.2s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-644800
--- PASS: TestFunctional/delete_minikube_cached_images (0.20s)

TestImageBuild/serial/Setup (121.61s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-697800 --driver=hyperv
E0524 19:05:09.003139    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 19:06:32.173222    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-697800 --driver=hyperv: (2m1.6100208s)
--- PASS: TestImageBuild/serial/Setup (121.61s)

TestImageBuild/serial/NormalBuild (5.11s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-697800
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-697800: (5.1065725s)
--- PASS: TestImageBuild/serial/NormalBuild (5.11s)

TestImageBuild/serial/BuildWithBuildArg (6.01s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-697800
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-697800: (6.0050562s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (6.01s)
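The --build-opt=build-arg=ENV_A=test_env_str flag in the run above is passed through to the underlying docker build inside the VM as a build argument. A Dockerfile that consumes such an argument would look roughly like the following sketch; this is an illustrative assumption, not the actual testdata/image-build/test-arg fixture, which is not reproduced in this report:

# Hypothetical Dockerfile consuming the ENV_A build argument (assumed for illustration only)
FROM gcr.io/k8s-minikube/busybox
ARG ENV_A
RUN echo "ENV_A=${ENV_A}"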

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (3.55s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-697800
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-697800: (3.5469006s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (3.55s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (3.32s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-697800
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-697800: (3.3177582s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (3.32s)

TestIngressAddonLegacy/StartLegacyK8sCluster (159.53s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-611500 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv
E0524 19:07:27.025562    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
E0524 19:07:37.277030    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
E0524 19:07:57.770119    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
E0524 19:08:38.737249    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
E0524 19:10:00.665146    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-611500 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv: (2m39.5276474s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (159.53s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (26.44s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-611500 addons enable ingress --alsologtostderr -v=5
E0524 19:10:08.999231    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-611500 addons enable ingress --alsologtostderr -v=5: (26.4396836s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (26.44s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (3.37s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-611500 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-611500 addons enable ingress-dns --alsologtostderr -v=5: (3.366191s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (3.37s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (50.07s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-611500 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-611500 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.1033182s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-611500 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-611500 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [e912acec-39f0-47bb-8a0e-bcb49173eca6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [e912acec-39f0-47bb-8a0e-bcb49173eca6] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 15.0828285s
addons_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-611500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-611500 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (3.838278s)
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-611500 replace --force -f testdata\ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-611500 ip
addons_test.go:267: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-611500 ip: (1.02305s)
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 172.27.131.35
addons_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-611500 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-611500 addons disable ingress-dns --alsologtostderr -v=1: (9.832533s)
addons_test.go:287: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-611500 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-611500 addons disable ingress --alsologtostderr -v=1: (10.4791832s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (50.07s)
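The ingress check above drives curl at 127.0.0.1 from inside the VM with the Host header forced to nginx.example.com, so the nginx ingress routes on the virtual host rather than on the IP. As a minimal Go sketch of the same host-header request (the address and host name are taken from the log; running it from outside the VM would need the ingress IP instead):

// hostheader_check.go - a sketch, not part of the test suite: request the ingress
// address directly while overriding the Host header, mirroring the curl call above.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// The ingress rule matches the virtual host, not the connection target.
	req.Host = "nginx.example.com"

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}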

                                                
                                    
x
+
TestJSONOutput/start/Command (132.89s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-382000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0524 19:12:44.513143    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-382000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (2m12.8926652s)
--- PASS: TestJSONOutput/start/Command (132.89s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (3.69s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-382000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-382000 --output=json --user=testUser: (3.6941055s)
--- PASS: TestJSONOutput/pause/Command (3.69s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (3.59s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-382000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-382000 --output=json --user=testUser: (3.5908835s)
--- PASS: TestJSONOutput/unpause/Command (3.59s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (24.46s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-382000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-382000 --output=json --user=testUser: (24.4584023s)
--- PASS: TestJSONOutput/stop/Command (24.46s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (1.59s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-483400 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-483400 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (264.2147ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ccb91dff-44c3-49e5-a14e-196544275a57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-483400] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a4f1058a-ac4c-45c3-9c7e-56e31aeeaf87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube1\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"35f7ed04-e898-4064-85ac-caf5e7038f39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"19bee5d5-7b21-4c96-9c19-df422029b27f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube1\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"5f5f5dff-7696-4155-a5b9-b46f969abe1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16573"}}
	{"specversion":"1.0","id":"ea433d40-f4fc-4ee4-8e96-cf13006c366c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2c825987-c2b3-4533-9fae-692804a9c43b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-483400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-483400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-483400: (1.3206428s)
--- PASS: TestErrorJSONOutput (1.59s)
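The stdout above is what --output=json emits: one CloudEvents-style JSON object per line, with the payload under "data" (step, info, and error types all appear in this run). A minimal Go sketch for consuming such a stream, using only the keys visible in the log; this is not minikube's own schema definition:

// decode_events.go - read minikube --output=json lines from stdin and print a
// short summary per event. Field names mirror the log above only.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		default:
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}
}

For example: out/minikube-windows-amd64.exe start -p json-output-382000 --output=json ... | go run decode_events.go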

                                                
                                    
x
+
TestMainNoArgs (0.23s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.23s)

                                                
                                    
x
+
TestMinikubeProfile (320.59s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-060800 --driver=hyperv
E0524 19:15:31.372510    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:15:31.387531    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:15:31.402760    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:15:31.434340    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:15:31.481044    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:15:31.576806    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:15:31.749193    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:15:32.079789    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:15:32.728493    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:15:34.023417    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:15:36.596466    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:15:41.724579    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:15:51.969186    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:16:12.462404    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:16:53.429350    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:17:16.685721    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-060800 --driver=hyperv: (2m0.4763211s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-060800 --driver=hyperv
E0524 19:18:15.355651    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-060800 --driver=hyperv: (2m3.3621299s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-060800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (6.2890811s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-060800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (6.2869234s)
helpers_test.go:175: Cleaning up "second-060800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-060800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-060800: (31.9736313s)
helpers_test.go:175: Cleaning up "first-060800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-060800
E0524 19:20:09.002635    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 19:20:31.362207    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-060800: (31.3170397s)
--- PASS: TestMinikubeProfile (320.59s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (80.94s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-742700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0524 19:20:59.210724    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-742700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (1m19.9294214s)
--- PASS: TestMountStart/serial/StartWithMountFirst (80.94s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (3.89s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-742700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-742700 ssh -- ls /minikube-host: (3.8858957s)
--- PASS: TestMountStart/serial/VerifyMountFirst (3.89s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (81.02s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-742700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0524 19:22:16.685290    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
E0524 19:23:12.181143    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-742700 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (1m20.0071103s)
--- PASS: TestMountStart/serial/StartWithMountSecond (81.02s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (3.78s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-742700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-742700 ssh -- ls /minikube-host: (3.784228s)
--- PASS: TestMountStart/serial/VerifyMountSecond (3.78s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (13.32s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-742700 --alsologtostderr -v=5
E0524 19:23:39.877552    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-742700 --alsologtostderr -v=5: (13.3179252s)
--- PASS: TestMountStart/serial/DeleteFirst (13.32s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (3.95s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-742700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-742700 ssh -- ls /minikube-host: (3.9519056s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (3.95s)

                                                
                                    
x
+
TestMountStart/serial/Stop (11.87s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-742700
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-742700: (11.8742951s)
--- PASS: TestMountStart/serial/Stop (11.87s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (63.76s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-742700
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-742700: (1m2.7493405s)
--- PASS: TestMountStart/serial/RestartStopped (63.76s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (3.91s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-742700 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-742700 ssh -- ls /minikube-host: (3.9108966s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (3.91s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (270.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-237000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0524 19:25:31.364576    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:27:16.693940    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
multinode_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-237000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (4m20.7723067s)
multinode_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 status --alsologtostderr
multinode_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 status --alsologtostderr: (9.8216305s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (270.59s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (9.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- rollout status deployment/busybox: (3.464203s)
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-9t5bp -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-9t5bp -- nslookup kubernetes.io: (1.7653744s)
multinode_test.go:524: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-tdzj2 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-9t5bp -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-tdzj2 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-9t5bp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-237000 -- exec busybox-67b7f59bb-tdzj2 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.47s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (134.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-237000 -v 3 --alsologtostderr
E0524 19:31:54.579907    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:32:16.679249    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
multinode_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-237000 -v 3 --alsologtostderr: (2m0.0467244s)
multinode_test.go:116: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 status --alsologtostderr
multinode_test.go:116: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 status --alsologtostderr: (14.5828609s)
--- PASS: TestMultiNode/serial/AddNode (134.63s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (3.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.2742149s)
--- PASS: TestMultiNode/serial/ProfileList (3.27s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (146.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 status --output json --alsologtostderr: (14.5178984s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 cp testdata\cp-test.txt multinode-237000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 cp testdata\cp-test.txt multinode-237000:/home/docker/cp-test.txt: (3.7999701s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000 "sudo cat /home/docker/cp-test.txt": (3.8473369s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1001856804\001\cp-test_multinode-237000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1001856804\001\cp-test_multinode-237000.txt: (3.7392905s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000 "sudo cat /home/docker/cp-test.txt": (3.8598912s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000:/home/docker/cp-test.txt multinode-237000-m02:/home/docker/cp-test_multinode-237000_multinode-237000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000:/home/docker/cp-test.txt multinode-237000-m02:/home/docker/cp-test_multinode-237000_multinode-237000-m02.txt: (6.542965s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000 "sudo cat /home/docker/cp-test.txt": (3.7874746s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m02 "sudo cat /home/docker/cp-test_multinode-237000_multinode-237000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m02 "sudo cat /home/docker/cp-test_multinode-237000_multinode-237000-m02.txt": (3.8395184s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000:/home/docker/cp-test.txt multinode-237000-m03:/home/docker/cp-test_multinode-237000_multinode-237000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000:/home/docker/cp-test.txt multinode-237000-m03:/home/docker/cp-test_multinode-237000_multinode-237000-m03.txt: (6.8408891s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000 "sudo cat /home/docker/cp-test.txt": (3.8516658s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m03 "sudo cat /home/docker/cp-test_multinode-237000_multinode-237000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m03 "sudo cat /home/docker/cp-test_multinode-237000_multinode-237000-m03.txt": (3.8763204s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 cp testdata\cp-test.txt multinode-237000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 cp testdata\cp-test.txt multinode-237000-m02:/home/docker/cp-test.txt: (3.8301932s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m02 "sudo cat /home/docker/cp-test.txt": (3.8276521s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1001856804\001\cp-test_multinode-237000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1001856804\001\cp-test_multinode-237000-m02.txt: (3.8908251s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m02 "sudo cat /home/docker/cp-test.txt": (3.8789845s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000-m02:/home/docker/cp-test.txt multinode-237000:/home/docker/cp-test_multinode-237000-m02_multinode-237000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000-m02:/home/docker/cp-test.txt multinode-237000:/home/docker/cp-test_multinode-237000-m02_multinode-237000.txt: (6.5681349s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m02 "sudo cat /home/docker/cp-test.txt": (3.896071s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000 "sudo cat /home/docker/cp-test_multinode-237000-m02_multinode-237000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000 "sudo cat /home/docker/cp-test_multinode-237000-m02_multinode-237000.txt": (3.81805s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000-m02:/home/docker/cp-test.txt multinode-237000-m03:/home/docker/cp-test_multinode-237000-m02_multinode-237000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000-m02:/home/docker/cp-test.txt multinode-237000-m03:/home/docker/cp-test_multinode-237000-m02_multinode-237000-m03.txt: (6.7064966s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m02 "sudo cat /home/docker/cp-test.txt": (3.7131093s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m03 "sudo cat /home/docker/cp-test_multinode-237000-m02_multinode-237000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m03 "sudo cat /home/docker/cp-test_multinode-237000-m02_multinode-237000-m03.txt": (3.8383703s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 cp testdata\cp-test.txt multinode-237000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 cp testdata\cp-test.txt multinode-237000-m03:/home/docker/cp-test.txt: (4.0145018s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m03 "sudo cat /home/docker/cp-test.txt": (3.9128079s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1001856804\001\cp-test_multinode-237000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube1\AppData\Local\Temp\TestMultiNodeserialCopyFile1001856804\001\cp-test_multinode-237000-m03.txt: (3.8921163s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m03 "sudo cat /home/docker/cp-test.txt": (3.8063774s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000-m03:/home/docker/cp-test.txt multinode-237000:/home/docker/cp-test_multinode-237000-m03_multinode-237000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000-m03:/home/docker/cp-test.txt multinode-237000:/home/docker/cp-test_multinode-237000-m03_multinode-237000.txt: (6.6660203s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m03 "sudo cat /home/docker/cp-test.txt": (3.7985248s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000 "sudo cat /home/docker/cp-test_multinode-237000-m03_multinode-237000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000 "sudo cat /home/docker/cp-test_multinode-237000-m03_multinode-237000.txt": (3.8104458s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000-m03:/home/docker/cp-test.txt multinode-237000-m02:/home/docker/cp-test_multinode-237000-m03_multinode-237000-m02.txt
E0524 19:35:08.989602    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 cp multinode-237000-m03:/home/docker/cp-test.txt multinode-237000-m02:/home/docker/cp-test_multinode-237000-m03_multinode-237000-m02.txt: (6.5597884s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m03 "sudo cat /home/docker/cp-test.txt": (4.0181337s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m02 "sudo cat /home/docker/cp-test_multinode-237000-m03_multinode-237000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 ssh -n multinode-237000-m02 "sudo cat /home/docker/cp-test_multinode-237000-m03_multinode-237000-m02.txt": (3.8165339s)
--- PASS: TestMultiNode/serial/CopyFile (146.78s)
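Each round trip above pairs a "minikube cp" with an "ssh -n <node> sudo cat" readback. A minimal Go sketch of one such round trip via os/exec, reusing the binary path, profile, and file paths shown in the log (the local testdata file is assumed to exist):

// cp_roundtrip.go - copy a local file into a node with `minikube cp`, then read it
// back over `minikube ssh`, as in the CopyFile log above. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
	return string(out), err
}

func main() {
	if _, err := run("-p", "multinode-237000", "cp", "testdata/cp-test.txt", "multinode-237000:/home/docker/cp-test.txt"); err != nil {
		panic(err)
	}
	out, err := run("-p", "multinode-237000", "ssh", "-n", "multinode-237000", "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}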

                                                
                                    
x
+
TestMultiNode/serial/StopNode (33.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 node stop m03
E0524 19:35:31.366611    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
multinode_test.go:210: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 node stop m03: (11.7720363s)
multinode_test.go:216: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-237000 status: exit status 7 (10.7099768s)

                                                
                                                
-- stdout --
	multinode-237000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-237000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-237000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-237000 status --alsologtostderr: exit status 7 (10.5310389s)

                                                
                                                
-- stdout --
	multinode-237000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-237000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-237000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 19:35:44.585048    9280 out.go:296] Setting OutFile to fd 856 ...
	I0524 19:35:44.653874    9280 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 19:35:44.653874    9280 out.go:309] Setting ErrFile to fd 796...
	I0524 19:35:44.653874    9280 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 19:35:44.667154    9280 out.go:303] Setting JSON to false
	I0524 19:35:44.667154    9280 mustload.go:65] Loading cluster: multinode-237000
	I0524 19:35:44.667154    9280 notify.go:220] Checking for updates...
	I0524 19:35:44.668341    9280 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:35:44.668341    9280 status.go:255] checking status of multinode-237000 ...
	I0524 19:35:44.669130    9280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:35:45.486790    9280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:35:45.486858    9280 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:35:45.486858    9280 status.go:330] multinode-237000 host status = "Running" (err=<nil>)
	I0524 19:35:45.486858    9280 host.go:66] Checking if "multinode-237000" exists ...
	I0524 19:35:45.487664    9280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:35:46.292309    9280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:35:46.292309    9280 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:35:46.292455    9280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:35:47.432456    9280 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:35:47.432456    9280 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:35:47.432541    9280 host.go:66] Checking if "multinode-237000" exists ...
	I0524 19:35:47.442668    9280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0524 19:35:47.442668    9280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:35:48.200938    9280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:35:48.200938    9280 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:35:48.201154    9280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000 ).networkadapters[0]).ipaddresses[0]
	I0524 19:35:49.307850    9280 main.go:141] libmachine: [stdout =====>] : 172.27.130.107
	
	I0524 19:35:49.307935    9280 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:35:49.307935    9280 sshutil.go:53] new ssh client: &{IP:172.27.130.107 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000\id_rsa Username:docker}
	I0524 19:35:49.432199    9280 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.9895321s)
	I0524 19:35:49.443809    9280 ssh_runner.go:195] Run: systemctl --version
	I0524 19:35:49.467448    9280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 19:35:49.492595    9280 kubeconfig.go:92] found "multinode-237000" server: "https://172.27.130.107:8443"
	I0524 19:35:49.492595    9280 api_server.go:166] Checking apiserver status ...
	I0524 19:35:49.504234    9280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0524 19:35:49.536351    9280 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1952/cgroup
	I0524 19:35:49.553712    9280 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/pod9df549a886a8b8feca4108c5fa576f3b/30b43ae6055b8e52934aa736ff06f16afb2a355cc7363194ecbc4d3d7c73baff"
	I0524 19:35:49.564173    9280 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod9df549a886a8b8feca4108c5fa576f3b/30b43ae6055b8e52934aa736ff06f16afb2a355cc7363194ecbc4d3d7c73baff/freezer.state
	I0524 19:35:49.581287    9280 api_server.go:204] freezer state: "THAWED"
	I0524 19:35:49.582393    9280 api_server.go:253] Checking apiserver healthz at https://172.27.130.107:8443/healthz ...
	I0524 19:35:49.595770    9280 api_server.go:279] https://172.27.130.107:8443/healthz returned 200:
	ok
	I0524 19:35:49.596154    9280 status.go:421] multinode-237000 apiserver status = Running (err=<nil>)
	I0524 19:35:49.596154    9280 status.go:257] multinode-237000 status: &{Name:multinode-237000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0524 19:35:49.596154    9280 status.go:255] checking status of multinode-237000-m02 ...
	I0524 19:35:49.597163    9280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:35:50.375267    9280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:35:50.375267    9280 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:35:50.375381    9280 status.go:330] multinode-237000-m02 host status = "Running" (err=<nil>)
	I0524 19:35:50.375381    9280 host.go:66] Checking if "multinode-237000-m02" exists ...
	I0524 19:35:50.375600    9280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:35:51.133150    9280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:35:51.133150    9280 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:35:51.133465    9280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:35:52.221619    9280 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:35:52.221619    9280 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:35:52.221619    9280 host.go:66] Checking if "multinode-237000-m02" exists ...
	I0524 19:35:52.233774    9280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0524 19:35:52.233774    9280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:35:52.983264    9280 main.go:141] libmachine: [stdout =====>] : Running
	
	I0524 19:35:52.983264    9280 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:35:52.983264    9280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-237000-m02 ).networkadapters[0]).ipaddresses[0]
	I0524 19:35:54.086460    9280 main.go:141] libmachine: [stdout =====>] : 172.27.128.127
	
	I0524 19:35:54.086695    9280 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:35:54.087070    9280 sshutil.go:53] new ssh client: &{IP:172.27.128.127 Port:22 SSHKeyPath:C:\Users\jenkins.minikube1\minikube-integration\.minikube\machines\multinode-237000-m02\id_rsa Username:docker}
	I0524 19:35:54.194229    9280 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.9603519s)
	I0524 19:35:54.205433    9280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0524 19:35:54.227804    9280 status.go:257] multinode-237000-m02 status: &{Name:multinode-237000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0524 19:35:54.227873    9280 status.go:255] checking status of multinode-237000-m03 ...
	I0524 19:35:54.228849    9280 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m03 ).state
	I0524 19:35:54.957057    9280 main.go:141] libmachine: [stdout =====>] : Off
	
	I0524 19:35:54.957259    9280 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:35:54.957317    9280 status.go:330] multinode-237000-m03 host status = "Stopped" (err=<nil>)
	I0524 19:35:54.957317    9280 status.go:343] host is not running, skipping remaining checks
	I0524 19:35:54.957317    9280 status.go:257] multinode-237000-m03 status: &{Name:multinode-237000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (33.01s)
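
The StopNode status check above shows the probe order minikube's Hyper-V driver follows: ask PowerShell for the VM state, then for the first adapter address, and only then open an SSH session for the in-guest checks (df on /var, systemctl is-active kubelet). Below is a minimal Go sketch of the two PowerShell probes, assuming powershell.exe is on PATH; it mirrors the commands logged as "[executing ==>]" and is an illustration, not minikube's libmachine code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runPS runs a single PowerShell expression non-interactively, the same way the
// "[executing ==>]" lines above invoke powershell.exe.
func runPS(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	vm := "multinode-237000-m02" // VM name taken from the log above

	state, err := runPS(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
	if err != nil {
		fmt.Println("state query failed:", err)
		return
	}
	fmt.Println("host state:", state) // "Running" or "Off" in the log

	if state == "Running" {
		ip, err := runPS(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
		if err == nil {
			fmt.Println("first adapter IP:", ip) // 172.27.128.127 in the log
		}
	}
}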

                                                
                                    
TestMultiNode/serial/StartAfterStop (95.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 node start m03 --alsologtostderr: (1m20.4973978s)
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 status
E0524 19:37:16.687946    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
multinode_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 status: (14.4040754s)
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (95.12s)

                                                
                                    
TestMultiNode/serial/DeleteNode (30.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 node delete m03: (20.3572964s)
multinode_test.go:400: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 status --alsologtostderr
multinode_test.go:400: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 status --alsologtostderr: (9.4588345s)
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (30.27s)
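
The DeleteNode check ends with the kubectl go-template shown above, which prints only each node's Ready condition status. A short Go sketch of the same verification, shelling out to kubectl (assumed to be on PATH) and counting Ready=True nodes; what count to expect is the caller's decision, not something shown in this log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// readyTmpl is the template the test uses above: for every node, emit the status
// of the condition whose type is "Ready" and nothing else.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+readyTmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	statuses := strings.Fields(string(out))
	ready := 0
	for _, s := range statuses {
		if s == "True" {
			ready++
		}
	}
	fmt.Printf("%d of %d node(s) report Ready=True\n", ready, len(statuses))
}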

                                                
                                    
TestMultiNode/serial/StopMultiNode (49.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 stop
multinode_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 stop: (45.693791s)
multinode_test.go:320: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-237000 status: exit status 7 (1.6845356s)

                                                
                                                
-- stdout --
	multinode-237000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-237000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-237000 status --alsologtostderr: exit status 7 (1.6977488s)

                                                
                                                
-- stdout --
	multinode-237000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-237000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0524 19:44:46.102135    9500 out.go:296] Setting OutFile to fd 640 ...
	I0524 19:44:46.165132    9500 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 19:44:46.165132    9500 out.go:309] Setting ErrFile to fd 764...
	I0524 19:44:46.165132    9500 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0524 19:44:46.180922    9500 out.go:303] Setting JSON to false
	I0524 19:44:46.180922    9500 mustload.go:65] Loading cluster: multinode-237000
	I0524 19:44:46.181452    9500 notify.go:220] Checking for updates...
	I0524 19:44:46.182120    9500 config.go:182] Loaded profile config "multinode-237000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0524 19:44:46.182120    9500 status.go:255] checking status of multinode-237000 ...
	I0524 19:44:46.183263    9500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000 ).state
	I0524 19:44:46.934406    9500 main.go:141] libmachine: [stdout =====>] : Off
	
	I0524 19:44:46.934406    9500 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:44:46.934406    9500 status.go:330] multinode-237000 host status = "Stopped" (err=<nil>)
	I0524 19:44:46.934406    9500 status.go:343] host is not running, skipping remaining checks
	I0524 19:44:46.934406    9500 status.go:257] multinode-237000 status: &{Name:multinode-237000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0524 19:44:46.934406    9500 status.go:255] checking status of multinode-237000-m02 ...
	I0524 19:44:46.935121    9500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-237000-m02 ).state
	I0524 19:44:47.654333    9500 main.go:141] libmachine: [stdout =====>] : Off
	
	I0524 19:44:47.654333    9500 main.go:141] libmachine: [stderr =====>] : 
	I0524 19:44:47.654333    9500 status.go:330] multinode-237000-m02 host status = "Stopped" (err=<nil>)
	I0524 19:44:47.654333    9500 status.go:343] host is not running, skipping remaining checks
	I0524 19:44:47.654333    9500 status.go:257] multinode-237000-m02 status: &{Name:multinode-237000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (49.08s)
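
Both status runs above exit with code 7 once every host is stopped, which is how the test distinguishes "stopped" from a real failure. A small Go sketch of consuming that behaviour, treating exit status 7 as "cluster stopped"; the binary path and profile name are the ones used throughout this report.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "multinode-237000", "status")
	out, err := cmd.Output()
	fmt.Print(string(out)) // the per-node host/kubelet/apiserver summary shown above

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("cluster is running")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		fmt.Println("cluster is stopped (exit status 7, as in the log above)")
	default:
		fmt.Println("status failed:", err)
	}
}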

                                                
                                    
TestMultiNode/serial/RestartMultiNode (196.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-237000 --wait=true -v=8 --alsologtostderr --driver=hyperv
E0524 19:45:08.989835    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 19:45:31.360291    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 19:47:16.686492    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
multinode_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-237000 --wait=true -v=8 --alsologtostderr --driver=hyperv: (3m6.212895s)
multinode_test.go:360: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-237000 status --alsologtostderr
multinode_test.go:360: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-237000 status --alsologtostderr: (9.6460687s)
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (196.38s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (165.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-237000
multinode_test.go:452: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-237000-m02 --driver=hyperv
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-237000-m02 --driver=hyperv: exit status 14 (265.764ms)

                                                
                                                
-- stdout --
	* [multinode-237000-m02] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-237000-m02' is duplicated with machine name 'multinode-237000-m02' in profile 'multinode-237000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-237000-m03 --driver=hyperv
E0524 19:48:34.592989    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
multinode_test.go:460: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-237000-m03 --driver=hyperv: (2m3.7274653s)
multinode_test.go:467: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-237000
E0524 19:50:08.994121    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-237000: exit status 80 (3.8206906s)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-237000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-237000-m03 already exists in multinode-237000-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube1\AppData\Local\Temp\minikube_node_17615de98fc431ce4460405c35b285c54151ae7f_44.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-237000-m03
E0524 19:50:31.369244    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-237000-m03: (37.0077848s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (165.03s)

                                                
                                    
TestPreload (353.49s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-134100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0524 19:52:16.683953    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-134100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (3m12.1613885s)
preload_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-134100 -- docker pull gcr.io/k8s-minikube/busybox
E0524 19:55:08.993511    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
preload_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-134100 -- docker pull gcr.io/k8s-minikube/busybox: (4.8273642s)
preload_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-134100
E0524 19:55:31.370307    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
preload_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-134100: (23.2266511s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-134100 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0524 19:56:32.184608    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 19:56:59.897077    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
E0524 19:57:16.680369    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-134100 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (1m44.5031331s)
preload_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p test-preload-134100 -- docker images
preload_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe ssh -p test-preload-134100 -- docker images: (3.8134618s)
helpers_test.go:175: Cleaning up "test-preload-134100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-134100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-134100: (24.951973s)
--- PASS: TestPreload (353.49s)
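
TestPreload's closing assertion is that the image pulled by hand before the stop (gcr.io/k8s-minikube/busybox) is still present after the preloaded restart. A sketch of that final check, running the same "ssh -- docker images" command the test runs and searching its output; the profile name and binary path are taken from the log above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List images inside the VM, as the last preload_test.go step above does.
	out, err := exec.Command("out/minikube-windows-amd64.exe",
		"ssh", "-p", "test-preload-134100", "--", "docker", "images").Output()
	if err != nil {
		fmt.Println("minikube ssh failed:", err)
		return
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("manually pulled image survived the preloaded restart")
	} else {
		fmt.Println("manually pulled image is missing after the restart")
	}
}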

                                                
                                    
TestScheduledStopWindows (217.32s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-174400 --memory=2048 --driver=hyperv
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-174400 --memory=2048 --driver=hyperv: (1m57.6997017s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-174400 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-174400 --schedule 5m: (4.594849s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-174400 -n scheduled-stop-174400
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-174400 -n scheduled-stop-174400: (5.0639341s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-174400 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-174400 -- sudo systemctl show minikube-scheduled-stop --no-page: (3.7150457s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-174400 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-174400 --schedule 5s: (4.7840342s)
E0524 20:00:08.990710    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 20:00:31.366069    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-174400
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-174400: exit status 7 (972.7365ms)

                                                
                                                
-- stdout --
	scheduled-stop-174400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-174400 -n scheduled-stop-174400
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-174400 -n scheduled-stop-174400: exit status 7 (982.8015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-174400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-174400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-174400: (19.4916299s)
--- PASS: TestScheduledStopWindows (217.32s)
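
TestScheduledStopWindows drives everything through the CLI: schedule a stop with "stop --schedule", read the countdown back via the {{.TimeToStop}} status template, and inspect the in-guest minikube-scheduled-stop systemd unit. A sketch of the schedule-then-read-back pair using the flags visible in the log; the 5m duration and profile name are simply the values this run used.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

const (
	minikube = "out/minikube-windows-amd64.exe" // test binary path from this report
	profile  = "scheduled-stop-174400"          // profile name from the log above
)

func main() {
	// Schedule a stop five minutes out, as scheduled_stop_test.go does above.
	if err := exec.Command(minikube, "stop", "-p", profile, "--schedule", "5m").Run(); err != nil {
		fmt.Println("scheduling the stop failed:", err)
		return
	}

	// Read the remaining time back with the same status template the test uses.
	out, err := exec.Command(minikube, "status", "--format={{.TimeToStop}}", "-p", profile, "-n", profile).Output()
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("time to scheduled stop:", strings.TrimSpace(string(out)))
}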

                                                
                                    
TestKubernetesUpgrade (773.78s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-044700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-044700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv: (4m42.2514581s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-044700
version_upgrade_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-044700: (26.3623381s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-044700 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-044700 status --format={{.Host}}: exit status 7 (1.0440786s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-044700 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=hyperv
E0524 20:13:12.188937    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
version_upgrade_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-044700 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=hyperv: (3m42.7599292s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-044700 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-044700 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-044700 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv: exit status 106 (283.081ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-044700] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-044700
	    minikube start -p kubernetes-upgrade-044700 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0447002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.2, by running:
	    
	    minikube start -p kubernetes-upgrade-044700 --kubernetes-version=v1.27.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-044700 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=hyperv
E0524 20:17:16.680508    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
version_upgrade_test.go:287: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-044700 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=hyperv: (3m23.8528707s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-044700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-044700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-044700: (37.0069909s)
--- PASS: TestKubernetesUpgrade (773.78s)
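
After the second start, TestKubernetesUpgrade confirms the control plane is really on the new version with "kubectl version --output=json" before attempting (and expecting to be refused) the downgrade to v1.16.0. A small Go sketch of that verification step, decoding kubectl's JSON and reading serverVersion.gitVersion; the context name comes from the log above.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// versionInfo captures only the field this check needs from kubectl's JSON output.
type versionInfo struct {
	ServerVersion struct {
		GitVersion string `json:"gitVersion"`
	} `json:"serverVersion"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-044700",
		"version", "--output=json").Output()
	if err != nil {
		fmt.Println("kubectl version failed:", err)
		return
	}
	var v versionInfo
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	fmt.Println("apiserver reports", v.ServerVersion.GitVersion) // expect v1.27.2 after the upgrade
}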

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-893100 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-893100 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (337.9015ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-893100] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.2965 Build 19045.2965
	  - KUBECONFIG=C:\Users\jenkins.minikube1\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube1\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16573
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.34s)

                                                
                                    
TestPause/serial/Start (144.25s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-893100 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-893100 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (2m24.2487977s)
--- PASS: TestPause/serial/Start (144.25s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.13s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (6.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-998200
version_upgrade_test.go:218: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-998200: (6.9727168s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (6.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (346.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-276900 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperv --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-276900 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperv --kubernetes-version=v1.16.0: (5m46.8868202s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (346.89s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (218.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-079300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-079300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.27.2: (3m38.0920137s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (218.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (180.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-125300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperv --kubernetes-version=v1.27.2
E0524 20:20:31.366432    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-125300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperv --kubernetes-version=v1.27.2: (3m0.7703213s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (180.77s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (240.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-549900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperv --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-549900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperv --kubernetes-version=v1.27.2: (4m0.7499421s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (240.75s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (11.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-276900 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d7be3aaa-6323-436e-8b45-4f0349b0b648] Pending
helpers_test.go:344: "busybox" [d7be3aaa-6323-436e-8b45-4f0349b0b648] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d7be3aaa-6323-436e-8b45-4f0349b0b648] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.2108496s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-276900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.18s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (4.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-276900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-276900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.0528192s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-276900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (4.46s)
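
EnableAddonWhileActive turns metrics-server on but redirects it to a fake registry and an echoserver image via --registries/--images, then describes the deployment to see what was actually rendered. A hedged, narrower check is sketched below: read just the container image with a jsonpath query and look for the overridden registry. It assumes the standard kube-system deployment name shown in the log; the exact composed image string is an assumption, not something this log records.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pull only the first container image of the metrics-server deployment, which
	// the override above is expected to point at fake.domain (the full composed
	// reference is an assumption, not taken from this log).
	out, err := exec.Command("kubectl", "--context", "old-k8s-version-276900",
		"-n", "kube-system", "get", "deploy", "metrics-server",
		"-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	image := strings.TrimSpace(string(out))
	fmt.Println("metrics-server image:", image)
	if strings.Contains(image, "fake.domain") {
		fmt.Println("registry override from 'addons enable' was applied")
	}
}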

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (25.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-276900 --alsologtostderr -v=3
E0524 20:21:54.602560    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-276900 --alsologtostderr -v=3: (25.9651571s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (25.97s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (2.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-276900 -n old-k8s-version-276900
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-276900 -n old-k8s-version-276900: exit status 7 (1.0400887s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-276900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-276900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (1.5791569s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (2.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (571.81s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-276900 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperv --kubernetes-version=v1.16.0
E0524 20:22:16.691775    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-276900 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperv --kubernetes-version=v1.16.0: (9m26.2443888s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-276900 -n old-k8s-version-276900
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-276900 -n old-k8s-version-276900: (5.564841s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (571.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-079300 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [51f9b61f-f957-4889-9e43-95d0b4f054a0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [51f9b61f-f957-4889-9e43-95d0b4f054a0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.074854s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-079300 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (4.82s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-079300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-079300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.4427382s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-079300 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (4.82s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (26.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-079300 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-079300 --alsologtostderr -v=3: (26.7303371s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (26.73s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (2.58s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-079300 -n no-preload-079300
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-079300 -n no-preload-079300: exit status 7 (1.094787s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-079300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-079300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (1.4807563s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (2.58s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (24.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-125300 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) Done: kubectl --context embed-certs-125300 create -f testdata\busybox.yaml: (1.1303774s)
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [007af071-e33c-43f5-88d5-16c7d50e44ea] Pending
helpers_test.go:344: "busybox" [007af071-e33c-43f5-88d5-16c7d50e44ea] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [007af071-e33c-43f5-88d5-16c7d50e44ea] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 22.7863009s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-125300 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (24.54s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (450.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-079300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-079300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.27.2: (7m25.50306s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-079300 -n no-preload-079300
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-079300 -n no-preload-079300: (5.2617761s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (450.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (10.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-125300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-125300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (10.1336322s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-125300 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (10.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (26.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-125300 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-125300 --alsologtostderr -v=3: (26.3733038s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (26.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-125300 -n embed-certs-125300
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-125300 -n embed-certs-125300: exit status 7 (1.0686728s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-125300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-125300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (1.8988607s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (2.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (423.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-125300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperv --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-125300 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperv --kubernetes-version=v1.27.2: (6m57.8088833s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-125300 -n embed-certs-125300
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-125300 -n embed-certs-125300: (5.6319924s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (423.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (19.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-549900 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f3c433f2-8d9d-49c3-a060-8844bba76e06] Pending
helpers_test.go:344: "busybox" [f3c433f2-8d9d-49c3-a060-8844bba76e06] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f3c433f2-8d9d-49c3-a060-8844bba76e06] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 18.0721843s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-549900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (19.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (8.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-549900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0524 20:25:08.987722    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-549900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (8.3711747s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-549900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (8.72s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (25.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-549900 --alsologtostderr -v=3
E0524 20:25:31.366347    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-549900 --alsologtostderr -v=3: (25.8498533s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (25.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-549900 -n default-k8s-diff-port-549900
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-549900 -n default-k8s-diff-port-549900: exit status 7 (1.0569126s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-549900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-549900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (1.5937219s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (2.65s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (415.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-549900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperv --kubernetes-version=v1.27.2
E0524 20:27:16.682918    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
E0524 20:29:52.189984    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 20:30:08.986379    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 20:30:19.913229    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
E0524 20:30:31.363188    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-549900 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperv --kubernetes-version=v1.27.2: (6m49.551963s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-549900 -n default-k8s-diff-port-549900
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-549900 -n default-k8s-diff-port-549900: (5.6928095s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (415.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-dvs69" [32595902-7793-4a04-8370-836646008db6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0310401s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-dvs69" [32595902-7793-4a04-8370-836646008db6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.03221s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-079300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (4.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-079300 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-079300 "sudo crictl images -o json": (4.0342371s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (4.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (29.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-079300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-079300 --alsologtostderr -v=1: (3.9560268s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-079300 -n no-preload-079300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-079300 -n no-preload-079300: exit status 2 (5.2981477s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-079300 -n no-preload-079300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-079300 -n no-preload-079300: exit status 2 (5.2333205s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-079300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-079300 --alsologtostderr -v=1: (4.0226046s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-079300 -n no-preload-079300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-079300 -n no-preload-079300: (5.6300773s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-079300 -n no-preload-079300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-079300 -n no-preload-079300: (5.1337144s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (29.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xj49n" [4c846bb7-7c97-411a-90fe-c065d9b694ff] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0344136s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.04s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-w7sk9" [6a85c487-0651-4589-9e1f-edfc554d6901] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0591981s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xj49n" [4c846bb7-7c97-411a-90fe-c065d9b694ff] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0246607s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-125300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-w7sk9" [6a85c487-0651-4589-9e1f-edfc554d6901] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0261722s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-276900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (4.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-125300 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-125300 "sudo crictl images -o json": (4.237657s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (4.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (4.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-276900 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-276900 "sudo crictl images -o json": (4.2937762s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (4.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (30.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-125300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-125300 --alsologtostderr -v=1: (4.0678425s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-125300 -n embed-certs-125300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-125300 -n embed-certs-125300: exit status 2 (5.1927006s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-125300 -n embed-certs-125300
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-125300 -n embed-certs-125300: exit status 2 (5.226668s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-125300 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-125300 --alsologtostderr -v=1: (5.0558023s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-125300 -n embed-certs-125300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-125300 -n embed-certs-125300: (5.4810903s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-125300 -n embed-certs-125300
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-125300 -n embed-certs-125300: (5.6469027s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (30.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (31.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-276900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-276900 --alsologtostderr -v=1: (3.9617823s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-276900 -n old-k8s-version-276900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-276900 -n old-k8s-version-276900: exit status 2 (5.2720147s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-276900 -n old-k8s-version-276900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-276900 -n old-k8s-version-276900: exit status 2 (5.5243179s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-276900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-276900 --alsologtostderr -v=1: (5.6139823s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-276900 -n old-k8s-version-276900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-276900 -n old-k8s-version-276900: (5.8037074s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-276900 -n old-k8s-version-276900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-276900 -n old-k8s-version-276900: (5.6274383s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (31.80s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (148.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-173700 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperv --kubernetes-version=v1.27.2
E0524 20:32:16.680654    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-173700 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperv --kubernetes-version=v1.27.2: (2m28.4595815s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (148.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-nrfz7" [4039c195-976e-48a5-9e1e-4547f2414fde] Running
E0524 20:32:42.151296    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
E0524 20:32:42.166727    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
E0524 20:32:42.182784    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
E0524 20:32:42.214953    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
E0524 20:32:42.262195    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
E0524 20:32:42.357609    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
E0524 20:32:42.531587    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
E0524 20:32:42.863072    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
E0524 20:32:43.511313    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
E0524 20:32:44.807621    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0723554s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.04s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-nrfz7" [4039c195-976e-48a5-9e1e-4547f2414fde] Running
E0524 20:32:47.376337    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.066624s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-549900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.04s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (4.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-549900 "sudo crictl images -o json"
E0524 20:32:52.507250    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-549900 "sudo crictl images -o json": (4.4879052s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (4.49s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (32.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-549900 --alsologtostderr -v=1
E0524 20:33:02.754584    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-549900 --alsologtostderr -v=1: (6.2161463s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-549900 -n default-k8s-diff-port-549900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-549900 -n default-k8s-diff-port-549900: exit status 2 (5.4605671s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-549900 -n default-k8s-diff-port-549900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-549900 -n default-k8s-diff-port-549900: exit status 2 (5.1934445s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-549900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-549900 --alsologtostderr -v=1: (4.8033294s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-549900 -n default-k8s-diff-port-549900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-549900 -n default-k8s-diff-port-549900: (5.3359964s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-549900 -n default-k8s-diff-port-549900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-549900 -n default-k8s-diff-port-549900: (5.1297825s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (32.14s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (180.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperv
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperv: (3m0.1699932s)
--- PASS: TestNetworkPlugins/group/auto/Start (180.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (258.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperv
E0524 20:33:23.237241    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperv: (4m18.424115s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (258.42s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (364.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperv
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperv: (6m4.2477739s)
--- PASS: TestNetworkPlugins/group/calico/Start (364.25s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-173700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-173700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (4.3184266s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (4.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (31.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-173700 --alsologtostderr -v=3
E0524 20:34:50.149525    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
E0524 20:34:50.164380    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
E0524 20:34:50.179794    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
E0524 20:34:50.210892    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
E0524 20:34:50.257947    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
E0524 20:34:50.352308    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
E0524 20:34:50.523271    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
E0524 20:34:50.851268    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
E0524 20:34:51.500051    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
E0524 20:34:52.796643    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
E0524 20:34:55.366053    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
E0524 20:35:00.489320    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
E0524 20:35:08.997146    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 20:35:10.730665    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-173700 --alsologtostderr -v=3: (31.533616s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (31.53s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (2.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-173700 -n newest-cni-173700
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-173700 -n newest-cni-173700: exit status 7 (1.0375735s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-173700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-173700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (1.6896474s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (2.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (259.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-173700 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperv --kubernetes-version=v1.27.2
E0524 20:35:26.132161    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
E0524 20:35:31.213733    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
E0524 20:35:31.355588    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-173700 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperv --kubernetes-version=v1.27.2: (4m14.1562469s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-173700 -n newest-cni-173700
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-173700 -n newest-cni-173700: (5.5769781s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (259.73s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (3.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-210100 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-210100 "pgrep -a kubelet": (3.9708458s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (3.97s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (15.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-210100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-b2m7k" [1d4bd73a-692a-49e7-8ae1-88d11415a9a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0524 20:36:12.220329    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
E0524 20:36:20.383583    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-276900\client.crt: The system cannot find the path specified.
E0524 20:36:20.399360    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-276900\client.crt: The system cannot find the path specified.
E0524 20:36:20.415259    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-276900\client.crt: The system cannot find the path specified.
E0524 20:36:20.447545    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-276900\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-7458db8b8-b2m7k" [1d4bd73a-692a-49e7-8ae1-88d11415a9a6] Running
E0524 20:36:20.494576    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-276900\client.crt: The system cannot find the path specified.
E0524 20:36:20.588407    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-276900\client.crt: The system cannot find the path specified.
E0524 20:36:20.761422    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-276900\client.crt: The system cannot find the path specified.
E0524 20:36:21.091474    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-276900\client.crt: The system cannot find the path specified.
E0524 20:36:21.739816    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-276900\client.crt: The system cannot find the path specified.
E0524 20:36:23.029591    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-276900\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.016473s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.68s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-210100 exec deployment/netcat -- nslookup kubernetes.default
E0524 20:36:25.593525    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-276900\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/auto/DNS (0.44s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.41s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.43s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qrlzq" [cfea664f-ef24-42f2-8a74-e57fcdd374d9] Running
E0524 20:37:42.163146    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
E0524 20:37:42.555978    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-276900\client.crt: The system cannot find the path specified.
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.0378172s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (4.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-210100 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-210100 "pgrep -a kubelet": (4.1317719s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (4.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (27.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-210100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-n8xrm" [565de157-55a8-47e7-aca5-d59edbb2464f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-n8xrm" [565de157-55a8-47e7-aca5-d59edbb2464f] Running
E0524 20:38:09.989126    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 27.0340919s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (27.70s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-210100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.48s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.47s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (4.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-173700 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-173700 "sudo crictl images -o json": (4.6642152s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (4.66s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (166.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=hyperv
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=hyperv: (2m46.3998336s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (166.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (33.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-173700 --alsologtostderr -v=1
E0524 20:39:50.143632    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-173700 --alsologtostderr -v=1: (5.643347s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-173700 -n newest-cni-173700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-173700 -n newest-cni-173700: exit status 2 (5.5631859s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-173700 -n newest-cni-173700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-173700 -n newest-cni-173700: exit status 2 (5.4599892s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-173700 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-173700 --alsologtostderr -v=1: (4.0539155s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-173700 -n newest-cni-173700
E0524 20:40:08.996125    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-173700 -n newest-cni-173700: (6.5558566s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-173700 -n newest-cni-173700
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-173700 -n newest-cni-173700: (6.2079175s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (33.49s)
E0524 20:49:03.121025    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-210100\client.crt: The system cannot find the path specified.
E0524 20:49:05.352331    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
E0524 20:49:13.369999    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-210100\client.crt: The system cannot find the path specified.
E0524 20:49:33.863541    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\false-210100\client.crt: The system cannot find the path specified.

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-s6h9x" [134c709f-53a2-49ec-b299-b4c31abc50ef] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.059784s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.07s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (4.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-210100 "pgrep -a kubelet"
E0524 20:40:17.994285    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-210100 "pgrep -a kubelet": (4.9739703s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (4.97s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (16.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-210100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-hcmt4" [b5fd4407-1c0f-4791-921b-f74a1bbbe058] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0524 20:40:31.358853    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-7458db8b8-hcmt4" [b5fd4407-1c0f-4791-921b-f74a1bbbe058] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 16.0326296s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (16.86s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.51s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-210100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.51s)

TestNetworkPlugins/group/calico/Localhost (0.47s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.47s)

TestNetworkPlugins/group/calico/HairPin (0.46s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.46s)
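The last three calico checks are one-shot execs into the netcat deployment: nslookup kubernetes.default exercises pod-to-service DNS, nc against localhost:8080 confirms the pod can reach its own listener, and nc against the netcat service name exercises hairpin traffic, i.e. the pod reaching itself back through its own Service. A compact, hedged sketch of all three, assuming kubectl on PATH and the calico-210100 context from the log; the commands themselves are copied from the lines above.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Each entry mirrors one command from the log: DNS, localhost, then hairpin.
		checks := []struct {
			name string
			argv []string
		}{
			{"dns", []string{"nslookup", "kubernetes.default"}},
			{"localhost", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}},
			{"hairpin", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"}},
		}
		for _, c := range checks {
			args := append([]string{"--context", "calico-210100", "exec", "deployment/netcat", "--"}, c.argv...)
			if out, err := exec.Command("kubectl", args...).CombinedOutput(); err != nil {
				fmt.Printf("%s check failed: %v\n%s", c.name, err, out)
			} else {
				fmt.Printf("%s check passed\n", c.name)
			}
		}
	}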

TestNetworkPlugins/group/false/Start (178.09s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperv
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p false-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperv: (2m58.0875182s)
--- PASS: TestNetworkPlugins/group/false/Start (178.09s)
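Each group's Start step differs only in how the network plugin is selected on the minikube command line: --cni=false, --cni=flannel, --cni=bridge, --enable-default-cni=true, or --network-plugin=kubenet, all on the hyperv driver with 3072 MB of memory. A small sketch that assembles those invocations from a lookup table; the flag values are the ones visible in this log, and the "<group>-210100" profile naming is an assumption about this run.

	package main

	import (
		"fmt"
		"strings"
	)

	// cniFlag maps the plugin group name to the extra flag its Start step passes,
	// as seen in the Start commands logged in this section.
	var cniFlag = map[string]string{
		"false":              "--cni=false",
		"flannel":            "--cni=flannel",
		"bridge":             "--cni=bridge",
		"enable-default-cni": "--enable-default-cni=true",
		"kubenet":            "--network-plugin=kubenet",
	}

	func startArgs(group string) string {
		args := []string{
			"start", "-p", group + "-210100",
			"--memory=3072", "--alsologtostderr",
			"--wait=true", "--wait-timeout=15m",
			cniFlag[group], "--driver=hyperv",
		}
		return "out/minikube-windows-amd64.exe " + strings.Join(args, " ")
	}

	func main() {
		for _, g := range []string{"false", "enable-default-cni", "flannel", "kubenet", "bridge"} {
			fmt.Println(startArgs(g))
		}
	}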

TestNetworkPlugins/group/enable-default-cni/Start (211.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperv
E0524 20:41:48.340882    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\old-k8s-version-276900\client.crt: The system cannot find the path specified.
E0524 20:41:51.395302    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-210100\client.crt: The system cannot find the path specified.
E0524 20:42:16.690358    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperv: (3m31.4645198s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (211.46s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (4.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-210100 "pgrep -a kubelet"
E0524 20:42:32.368543    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-210100\client.crt: The system cannot find the path specified.
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-210100 "pgrep -a kubelet": (4.1959299s)
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (4.20s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (17.75s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-210100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-lzn87" [1aa4c3f9-5a20-453b-820e-5d147f8622b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0524 20:42:38.639049    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
E0524 20:42:38.653703    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
E0524 20:42:38.669108    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
E0524 20:42:38.699768    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
E0524 20:42:38.748032    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
E0524 20:42:38.842126    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
E0524 20:42:39.016884    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
E0524 20:42:39.345368    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
E0524 20:42:39.994705    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
E0524 20:42:41.288876    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
E0524 20:42:42.162999    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\no-preload-079300\client.crt: The system cannot find the path specified.
E0524 20:42:43.849677    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-7458db8b8-lzn87" [1aa4c3f9-5a20-453b-820e-5d147f8622b4] Running
E0524 20:42:48.984572    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 17.0988403s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (17.75s)

TestNetworkPlugins/group/custom-flannel/DNS (0.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-210100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.45s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.40s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.42s)

TestNetworkPlugins/group/false/KubeletFlags (4.33s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-210100 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-210100 "pgrep -a kubelet": (4.3281029s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (4.33s)

TestNetworkPlugins/group/false/NetCatPod (32.25s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-210100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:148: (dbg) Done: kubectl --context false-210100 replace --force -f testdata\netcat-deployment.yaml: (1.1271374s)
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-b9x5k" [477e7bf0-dc2b-47a4-a989-2cc421e1ef12] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0524 20:43:54.289270    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-210100\client.crt: The system cannot find the path specified.
E0524 20:44:01.714425    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-7458db8b8-b9x5k" [477e7bf0-dc2b-47a4-a989-2cc421e1ef12] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 31.0416667s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (32.25s)

TestNetworkPlugins/group/flannel/Start (168.98s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperv
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperv: (2m48.9820098s)
--- PASS: TestNetworkPlugins/group/flannel/Start (168.98s)

TestNetworkPlugins/group/false/DNS (0.48s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-210100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.48s)

TestNetworkPlugins/group/false/Localhost (1.28s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
net_test.go:193: (dbg) Done: kubectl --context false-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080": (1.2730653s)
--- PASS: TestNetworkPlugins/group/false/Localhost (1.28s)

TestNetworkPlugins/group/false/HairPin (1.18s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:248: (dbg) Done: kubectl --context false-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": (1.1640344s)
--- PASS: TestNetworkPlugins/group/false/HairPin (1.18s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (4.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-210100 "pgrep -a kubelet"
E0524 20:45:20.901319    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-210100\client.crt: The system cannot find the path specified.
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-210100 "pgrep -a kubelet": (4.2644741s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (4.26s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (18.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-210100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-sw22k" [fbf79986-8e32-4973-93fd-a5aa4144fd67] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0524 20:45:23.646040    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
E0524 20:45:31.142292    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-210100\client.crt: The system cannot find the path specified.
E0524 20:45:31.363614    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-7458db8b8-sw22k" [fbf79986-8e32-4973-93fd-a5aa4144fd67] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 17.0284544s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (18.15s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-210100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.52s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.45s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.49s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.49s)

TestNetworkPlugins/group/kubenet/Start (197.29s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperv
E0524 20:46:32.190649    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\addons-830700\client.crt: The system cannot find the path specified.
E0524 20:46:32.602209    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-210100\client.crt: The system cannot find the path specified.
E0524 20:46:38.133731    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\auto-210100\client.crt: The system cannot find the path specified.
E0524 20:46:59.913383    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperv: (3m17.2886844s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (197.29s)

TestNetworkPlugins/group/flannel/ControllerPod (6.18s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-zk9fm" [fd555c28-3015-4d82-8e0a-b2e3c1c46d6a] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.1713351s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.18s)

TestNetworkPlugins/group/flannel/KubeletFlags (4.43s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-210100 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p flannel-210100 "pgrep -a kubelet": (4.4317851s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (4.43s)

TestNetworkPlugins/group/flannel/NetCatPod (15.78s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-210100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-2z7ms" [6408b8c9-195d-4ef9-b663-de0b27ac1c6e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0524 20:47:16.677914    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\functional-644800\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-7458db8b8-2z7ms" [6408b8c9-195d-4ef9-b663-de0b27ac1c6e] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 15.1325439s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (15.78s)

TestNetworkPlugins/group/flannel/DNS (0.53s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-210100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.53s)

TestNetworkPlugins/group/flannel/Localhost (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.44s)

TestNetworkPlugins/group/flannel/HairPin (0.44s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.44s)

TestNetworkPlugins/group/bridge/Start (158.53s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperv
E0524 20:47:44.705070    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\custom-flannel-210100\client.crt: The system cannot find the path specified.
E0524 20:47:54.537637    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-210100\client.crt: The system cannot find the path specified.
E0524 20:47:54.947486    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\custom-flannel-210100\client.crt: The system cannot find the path specified.
E0524 20:48:07.500286    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\kindnet-210100\client.crt: The system cannot find the path specified.
E0524 20:48:15.438353    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\custom-flannel-210100\client.crt: The system cannot find the path specified.
net_test.go:111: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-210100 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperv: (2m38.5282625s)
--- PASS: TestNetworkPlugins/group/bridge/Start (158.53s)

TestNetworkPlugins/group/kubenet/KubeletFlags (4.19s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-210100 "pgrep -a kubelet"
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-210100 "pgrep -a kubelet": (4.1904519s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (4.19s)

TestNetworkPlugins/group/kubenet/NetCatPod (16.65s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-210100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-c7qzp" [d8d21bc5-8f93-4dd2-b417-f595cc7debf4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0524 20:49:50.141252    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\default-k8s-diff-port-549900\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-7458db8b8-c7qzp" [d8d21bc5-8f93-4dd2-b417-f595cc7debf4] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 16.0289971s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (16.65s)

TestNetworkPlugins/group/kubenet/DNS (0.45s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-210100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.45s)

TestNetworkPlugins/group/kubenet/Localhost (0.49s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.49s)

TestNetworkPlugins/group/kubenet/HairPin (0.42s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.42s)

TestNetworkPlugins/group/bridge/KubeletFlags (4.17s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-210100 "pgrep -a kubelet"
E0524 20:50:22.507980    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-210100\client.crt: The system cannot find the path specified.
E0524 20:50:22.517686    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-210100\client.crt: The system cannot find the path specified.
E0524 20:50:22.538258    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-210100\client.crt: The system cannot find the path specified.
E0524 20:50:22.569265    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-210100\client.crt: The system cannot find the path specified.
E0524 20:50:22.609825    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-210100\client.crt: The system cannot find the path specified.
E0524 20:50:22.695670    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-210100\client.crt: The system cannot find the path specified.
E0524 20:50:22.870703    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-210100\client.crt: The system cannot find the path specified.
E0524 20:50:23.202121    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-210100\client.crt: The system cannot find the path specified.
E0524 20:50:23.849098    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-210100\client.crt: The system cannot find the path specified.
E0524 20:50:25.141235    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-210100\client.crt: The system cannot find the path specified.
net_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-210100 "pgrep -a kubelet": (4.1680331s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (4.17s)

TestNetworkPlugins/group/bridge/NetCatPod (15.66s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-210100 replace --force -f testdata\netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-nbf4t" [c22762e6-8049-4e26-981f-645ee43c45f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0524 20:50:27.716394    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-210100\client.crt: The system cannot find the path specified.
E0524 20:50:31.355585    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\ingress-addon-legacy-611500\client.crt: The system cannot find the path specified.
E0524 20:50:32.850574    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-210100\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-7458db8b8-nbf4t" [c22762e6-8049-4e26-981f-645ee43c45f9] Running
E0524 20:50:38.392179    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\calico-210100\client.crt: The system cannot find the path specified.
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.0226818s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.66s)

TestNetworkPlugins/group/bridge/DNS (0.47s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-210100 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.47s)

TestNetworkPlugins/group/bridge/Localhost (0.41s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0524 20:50:43.092769    6560 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\enable-default-cni-210100\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.41s)

TestNetworkPlugins/group/bridge/HairPin (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-210100 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.40s)

Test skip (29/300)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)

TestDownloadOnly/v1.27.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-644800 --alsologtostderr -v=1]
functional_test.go:911: output didn't produce a URL
functional_test.go:905: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-644800 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 8244: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (1.43s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-330000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-330000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-330000: (1.4310302s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.43s)

TestNetworkPlugins/group/cilium (23.39s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-210100 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined
>>> netcat: dig search kubernetes.default:
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined
>>> netcat: /etc/nsswitch.conf:
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined
>>> netcat: /etc/hosts:
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined
>>> netcat: /etc/resolv.conf:
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined
>>> host: /etc/nsswitch.conf:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"
>>> host: /etc/hosts:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"
>>> host: /etc/resolv.conf:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined
>>> host: crictl pods:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"
>>> host: crictl containers:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"
>>> k8s: describe netcat deployment:
error: context "cilium-210100" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-210100" does not exist
>>> k8s: netcat logs:
error: context "cilium-210100" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-210100" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-210100" does not exist
>>> k8s: coredns logs:
error: context "cilium-210100" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-210100" does not exist
>>> k8s: api server logs:
error: context "cilium-210100" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-210100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-210100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-210100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-210100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-210100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-210100" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-210100" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube1\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Wed, 24 May 2023 20:06:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://172.27.136.175:8443
  name: pause-893100
contexts:
- context:
    cluster: pause-893100
    extensions:
    - extension:
        last-update: Wed, 24 May 2023 20:06:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: context_info
    namespace: default
    user: pause-893100
  name: pause-893100
current-context: pause-893100
kind: Config
preferences: {}
users:
- name: pause-893100
  user:
    client-certificate: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\client.crt
    client-key: C:\Users\jenkins.minikube1\minikube-integration\.minikube\profiles\pause-893100\client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: 
* context was not found for specified context: cilium-210100
* cluster has no server defined

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-210100" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-210100"

                                                
                                                
----------------------- debugLogs end: cilium-210100 [took: 21.9950057s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-210100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-210100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cilium-210100: (1.3981639s)
--- SKIP: TestNetworkPlugins/group/cilium (23.39s)

                                                
                                    