Test Report: Hyper-V_Windows 16832

e16eb42a527afb2085d1ef5055e1e6ba63f4fdb3:2023-07-06:30015
Failed tests (6/302)

Order  Failed test                                    Duration (s)
206    TestMultiNode/serial/PingHostFrom2Pods                36.76
212    TestMultiNode/serial/RestartKeepsNodes               311.83
226    TestRunningBinaryUpgrade                             462.42
233    TestNoKubernetes/serial/StartWithStopK8s              44.27
255    TestStoppedBinaryUpgrade/Upgrade                     357.68
256    TestPause/serial/SecondStartNoReconfiguration        142.61
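For triage scripting, the failure summary above can be parsed into structured records. This is a small sketch (not part of the report's tooling); the rows are copied from this report, and the column meanings (order, test name, duration in seconds) are inferred from the table:

```python
# Parse "Order / Failed test / Duration" rows into (order, name, seconds)
# tuples, then pick out the slowest failure. Rows copied from this report.
rows = """\
206 TestMultiNode/serial/PingHostFrom2Pods 36.76
212 TestMultiNode/serial/RestartKeepsNodes 311.83
226 TestRunningBinaryUpgrade 462.42
233 TestNoKubernetes/serial/StartWithStopK8s 44.27
255 TestStoppedBinaryUpgrade/Upgrade 357.68
256 TestPause/serial/SecondStartNoReconfiguration 142.61
"""

failures = []
for line in rows.splitlines():
    order, name, duration = line.split()  # three whitespace-separated columns
    failures.append((int(order), name, float(duration)))

slowest = max(failures, key=lambda f: f[2])
print(slowest[1])  # → TestRunningBinaryUpgrade
```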
TestMultiNode/serial/PingHostFrom2Pods (36.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- exec busybox-67b7f59bb-47tnt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- exec busybox-67b7f59bb-47tnt -- sh -c "ping -c 1 172.29.64.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- exec busybox-67b7f59bb-47tnt -- sh -c "ping -c 1 172.29.64.1": exit status 1 (10.4251521s)

-- stdout --
	PING 172.29.64.1 (172.29.64.1): 56 data bytes
	
	--- 172.29.64.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (172.29.64.1) from pod (busybox-67b7f59bb-47tnt): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- exec busybox-67b7f59bb-qp6pw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- exec busybox-67b7f59bb-qp6pw -- sh -c "ping -c 1 172.29.64.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- exec busybox-67b7f59bb-qp6pw -- sh -c "ping -c 1 172.29.64.1": exit status 1 (10.4393059s)

-- stdout --
	PING 172.29.64.1 (172.29.64.1): 56 data bytes
	
	--- 172.29.64.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
multinode_test.go:572: Failed to ping host (172.29.64.1) from pod (busybox-67b7f59bb-qp6pw): exit status 1
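The failing check first resolves the host IP with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` (line 5 of the output, third space-separated field), then pings that IP from each pod. A sketch of that field extraction in Python, using illustrative BusyBox-style nslookup output (the sample text is hypothetical; only the line/field positions come from the test's pipeline):

```python
# Reproduce the extraction done by `awk 'NR==5' | cut -d' ' -f3`:
# take line 5 of the nslookup output, split on single spaces, keep field 3.
# The sample output below is illustrative, not captured from this run.
sample = """Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 172.29.64.1 host.minikube.internal
"""

lines = sample.splitlines()
line5 = lines[4]            # awk 'NR==5' selects the fifth line
fields = line5.split(" ")   # cut -d' ' splits on each single space
ip = fields[2]              # -f3 is the third field (1-based)
print(ip)  # → 172.29.64.1
```

The ping itself (`ping -c 1 172.29.64.1`) then fails with 100% packet loss, which points at host-to-pod traffic being blocked on the Hyper-V Default Switch rather than at the IP extraction.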
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-144300 -n multinode-144300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-144300 -n multinode-144300: (4.4857144s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 logs -n 25: (3.8893184s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-141900 ssh -- ls                    | mount-start-2-141900 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:43 UTC | 06 Jul 23 20:43 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-141900                           | mount-start-1-141900 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:43 UTC | 06 Jul 23 20:43 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-141900 ssh -- ls                    | mount-start-2-141900 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:43 UTC | 06 Jul 23 20:43 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-141900                           | mount-start-2-141900 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:43 UTC | 06 Jul 23 20:43 UTC |
	| start   | -p mount-start-2-141900                           | mount-start-2-141900 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:43 UTC | 06 Jul 23 20:44 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-141900 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:44 UTC |                     |
	|         | --profile mount-start-2-141900 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-141900 ssh -- ls                    | mount-start-2-141900 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:44 UTC | 06 Jul 23 20:44 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-141900                           | mount-start-2-141900 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:44 UTC | 06 Jul 23 20:44 UTC |
	| delete  | -p mount-start-1-141900                           | mount-start-1-141900 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:44 UTC | 06 Jul 23 20:44 UTC |
	| start   | -p multinode-144300                               | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:44 UTC | 06 Jul 23 20:48 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- apply -f                   | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:48 UTC | 06 Jul 23 20:48 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- rollout                    | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:48 UTC | 06 Jul 23 20:48 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- get pods -o                | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:48 UTC | 06 Jul 23 20:48 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- get pods -o                | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:48 UTC | 06 Jul 23 20:48 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- exec                       | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:48 UTC | 06 Jul 23 20:48 UTC |
	|         | busybox-67b7f59bb-47tnt --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- exec                       | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:48 UTC | 06 Jul 23 20:48 UTC |
	|         | busybox-67b7f59bb-qp6pw --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- exec                       | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:48 UTC | 06 Jul 23 20:48 UTC |
	|         | busybox-67b7f59bb-47tnt --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- exec                       | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:48 UTC | 06 Jul 23 20:48 UTC |
	|         | busybox-67b7f59bb-qp6pw --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- exec                       | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:48 UTC | 06 Jul 23 20:48 UTC |
	|         | busybox-67b7f59bb-47tnt -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- exec                       | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:48 UTC | 06 Jul 23 20:48 UTC |
	|         | busybox-67b7f59bb-qp6pw -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- get pods -o                | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:48 UTC | 06 Jul 23 20:48 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- exec                       | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:48 UTC | 06 Jul 23 20:48 UTC |
	|         | busybox-67b7f59bb-47tnt                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- exec                       | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:49 UTC |                     |
	|         | busybox-67b7f59bb-47tnt -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.29.64.1                          |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- exec                       | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:49 UTC | 06 Jul 23 20:49 UTC |
	|         | busybox-67b7f59bb-qp6pw                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-144300 -- exec                       | multinode-144300     | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:49 UTC |                     |
	|         | busybox-67b7f59bb-qp6pw -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.29.64.1                          |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/06 20:44:57
	Running on machine: minikube6
	Binary: Built with gc go1.20.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0706 20:44:57.479782    1540 out.go:296] Setting OutFile to fd 700 ...
	I0706 20:44:57.532095    1540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 20:44:57.532095    1540 out.go:309] Setting ErrFile to fd 668...
	I0706 20:44:57.532095    1540 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 20:44:57.551173    1540 out.go:303] Setting JSON to false
	I0706 20:44:57.553912    1540 start.go:127] hostinfo: {"hostname":"minikube6","uptime":494434,"bootTime":1688181863,"procs":146,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3155 Build 19045.3155","kernelVersion":"10.0.19045.3155 Build 19045.3155","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0706 20:44:57.554013    1540 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 20:44:57.558091    1540 out.go:177] * [multinode-144300] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	I0706 20:44:57.562983    1540 notify.go:220] Checking for updates...
	I0706 20:44:57.565223    1540 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:44:57.567268    1540 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 20:44:57.570094    1540 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0706 20:44:57.572888    1540 out.go:177]   - MINIKUBE_LOCATION=16832
	I0706 20:44:57.575804    1540 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 20:44:57.578753    1540 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 20:44:58.992229    1540 out.go:177] * Using the hyperv driver based on user configuration
	I0706 20:44:58.994395    1540 start.go:297] selected driver: hyperv
	I0706 20:44:58.994395    1540 start.go:944] validating driver "hyperv" against <nil>
	I0706 20:44:58.994395    1540 start.go:955] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 20:44:59.038413    1540 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 20:44:59.039178    1540 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 20:44:59.039178    1540 cni.go:84] Creating CNI manager for ""
	I0706 20:44:59.039178    1540 cni.go:137] 0 nodes found, recommending kindnet
	I0706 20:44:59.039178    1540 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0706 20:44:59.039178    1540 start_flags.go:319] config:
	{Name:multinode-144300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-144300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 20:44:59.040420    1540 iso.go:125] acquiring lock: {Name:mk7608081596fbafb2a8a8984d26768bdf174467 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 20:44:59.044931    1540 out.go:177] * Starting control plane node multinode-144300 in cluster multinode-144300
	I0706 20:44:59.050396    1540 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 20:44:59.050396    1540 preload.go:148] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0706 20:44:59.050396    1540 cache.go:57] Caching tarball of preloaded images
	I0706 20:44:59.051732    1540 preload.go:174] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0706 20:44:59.051732    1540 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 20:44:59.052190    1540 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\config.json ...
	I0706 20:44:59.052190    1540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\config.json: {Name:mk93890b5825f8210bb00b86014080f69d5685e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:44:59.054496    1540 start.go:365] acquiring machines lock for multinode-144300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 20:44:59.054676    1540 start.go:369] acquired machines lock for "multinode-144300" in 179.9µs
	I0706 20:44:59.054939    1540 start.go:93] Provisioning new machine with config: &{Name:multinode-144300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-144300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 20:44:59.054978    1540 start.go:125] createHost starting for "" (driver="hyperv")
	I0706 20:44:59.059010    1540 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 20:44:59.059010    1540 start.go:159] libmachine.API.Create for "multinode-144300" (driver="hyperv")
	I0706 20:44:59.059592    1540 client.go:168] LocalClient.Create starting
	I0706 20:44:59.059859    1540 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0706 20:44:59.059859    1540 main.go:141] libmachine: Decoding PEM data...
	I0706 20:44:59.059859    1540 main.go:141] libmachine: Parsing certificate...
	I0706 20:44:59.060428    1540 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0706 20:44:59.060716    1540 main.go:141] libmachine: Decoding PEM data...
	I0706 20:44:59.060716    1540 main.go:141] libmachine: Parsing certificate...
	I0706 20:44:59.060716    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0706 20:44:59.418060    1540 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0706 20:44:59.418158    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:44:59.418158    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0706 20:44:59.934776    1540 main.go:141] libmachine: [stdout =====>] : False
	
	I0706 20:44:59.935022    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:44:59.935148    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0706 20:45:00.355863    1540 main.go:141] libmachine: [stdout =====>] : True
	
	I0706 20:45:00.355863    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:00.355863    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0706 20:45:01.658039    1540 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0706 20:45:01.658039    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:01.660569    1540 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.30.1-1688144767-16765-amd64.iso...
	I0706 20:45:02.023068    1540 main.go:141] libmachine: Creating SSH key...
	I0706 20:45:02.304622    1540 main.go:141] libmachine: Creating VM...
	I0706 20:45:02.304622    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0706 20:45:03.548400    1540 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0706 20:45:03.548587    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:03.548706    1540 main.go:141] libmachine: Using switch "Default Switch"
	I0706 20:45:03.548798    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0706 20:45:04.104514    1540 main.go:141] libmachine: [stdout =====>] : True
	
	I0706 20:45:04.104587    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:04.104587    1540 main.go:141] libmachine: Creating VHD
	I0706 20:45:04.104659    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\fixed.vhd' -SizeBytes 10MB -Fixed
	I0706 20:45:05.717483    1540 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 7A2A1F76-9346-4885-AF96-E5F94FA99580
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0706 20:45:05.717483    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:05.717483    1540 main.go:141] libmachine: Writing magic tar header
	I0706 20:45:05.717483    1540 main.go:141] libmachine: Writing SSH key tar header
	I0706 20:45:05.725619    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\disk.vhd' -VHDType Dynamic -DeleteSource
	I0706 20:45:07.406853    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:07.406853    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:07.406853    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\disk.vhd' -SizeBytes 20000MB
	I0706 20:45:08.508435    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:08.508435    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:08.508435    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-144300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0706 20:45:10.294152    1540 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-144300 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0706 20:45:10.294152    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:10.294254    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-144300 -DynamicMemoryEnabled $false
	I0706 20:45:11.004688    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:11.005050    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:11.005050    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-144300 -Count 2
	I0706 20:45:11.734403    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:11.734403    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:11.734495    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-144300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\boot2docker.iso'
	I0706 20:45:12.741482    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:12.741482    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:12.741482    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-144300 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\disk.vhd'
	I0706 20:45:13.832797    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:13.832797    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:13.832797    1540 main.go:141] libmachine: Starting VM...
	I0706 20:45:13.832883    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-144300
	I0706 20:45:15.438247    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:15.438278    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:15.438345    1540 main.go:141] libmachine: Waiting for host to start...
	I0706 20:45:15.438373    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:16.100676    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:16.100676    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:16.100769    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:17.035651    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:17.035651    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:18.037847    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:18.673159    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:18.673159    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:18.673235    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:19.595356    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:19.595387    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:20.609922    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:21.275331    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:21.275564    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:21.275564    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:22.227353    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:22.227394    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:23.241257    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:23.923021    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:23.923085    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:23.923187    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:24.844971    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:24.845024    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:25.847282    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:26.499741    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:26.499741    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:26.499741    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:27.484343    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:27.484647    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:28.485124    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:29.161826    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:29.161879    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:29.161912    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:30.114383    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:30.114581    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:31.118549    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:31.786337    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:31.786337    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:31.786337    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:32.726311    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:32.726385    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:33.739967    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:34.453881    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:34.454260    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:34.454260    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:35.437579    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:45:35.437579    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:36.450275    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:37.133944    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:37.133944    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:37.133944    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:38.134740    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:45:38.134740    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:38.134822    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:38.853495    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:38.853698    1540 main.go:141] libmachine: [stderr =====>] : 
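The driver above polls `( Hyper-V\Get-VM ).state` and `networkadapters[0].ipaddresses[0]` roughly once a second until the guest's DHCP lease shows up (about 23s here, 20:45:15 to 20:45:38). A minimal sketch of that retry pattern in shell; `vm_ip` is a hypothetical stand-in for the real PowerShell query, wired to return nothing for the first two polls:

```shell
#!/bin/sh
# Stand-in for the PowerShell IP query: empty until the third poll,
# mimicking a guest that has not yet acquired a DHCP lease.
vm_ip() {
  [ "$1" -ge 3 ] && echo "172.29.70.202"
}

# Retry until an address appears or we give up after 120 attempts.
attempt=0
ip=""
while [ -z "$ip" ] && [ "$attempt" -lt 120 ]; do
  attempt=$((attempt + 1))
  ip=$(vm_ip "$attempt")
  # sleep 1   # the real driver waits between polls; elided here
done
echo "host up at $ip after $attempt polls"
```

The real loop also re-checks the VM state each iteration so a crashed guest fails fast instead of burning the full timeout.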
	I0706 20:45:38.853698    1540 machine.go:88] provisioning docker machine ...
	I0706 20:45:38.853897    1540 buildroot.go:166] provisioning hostname "multinode-144300"
	I0706 20:45:38.854007    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:39.566472    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:39.566629    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:39.566629    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:40.531995    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:45:40.532071    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:40.536543    1540 main.go:141] libmachine: Using SSH client type: native
	I0706 20:45:40.546129    1540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.70.202 22 <nil> <nil>}
	I0706 20:45:40.546129    1540 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-144300 && echo "multinode-144300" | sudo tee /etc/hostname
	I0706 20:45:40.719482    1540 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-144300
	
	I0706 20:45:40.719482    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:41.383176    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:41.383500    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:41.383583    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:42.319422    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:45:42.319422    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:42.323669    1540 main.go:141] libmachine: Using SSH client type: native
	I0706 20:45:42.324438    1540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.70.202 22 <nil> <nil>}
	I0706 20:45:42.324438    1540 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-144300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-144300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-144300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0706 20:45:42.476908    1540 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0706 20:45:42.476984    1540 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0706 20:45:42.477075    1540 buildroot.go:174] setting up certificates
	I0706 20:45:42.477144    1540 provision.go:83] configureAuth start
	I0706 20:45:42.477221    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:43.140597    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:43.140597    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:43.140597    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:44.069282    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:45:44.069282    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:44.069392    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:44.718968    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:44.718968    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:44.719096    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:45.666513    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:45:45.666513    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:45.666513    1540 provision.go:138] copyHostCerts
	I0706 20:45:45.666513    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0706 20:45:45.666513    1540 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0706 20:45:45.667039    1540 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0706 20:45:45.667504    1540 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0706 20:45:45.668405    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0706 20:45:45.668405    1540 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0706 20:45:45.668405    1540 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0706 20:45:45.668991    1540 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0706 20:45:45.670188    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0706 20:45:45.670474    1540 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0706 20:45:45.670568    1540 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0706 20:45:45.670670    1540 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0706 20:45:45.671646    1540 provision.go:112] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-144300 san=[172.29.70.202 172.29.70.202 localhost 127.0.0.1 minikube multinode-144300]
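minikube generates this server certificate in-process with Go's crypto/x509, but the same artifact can be produced with openssl for comparison. An illustrative sketch (paths and subjects are placeholders; the SAN list mirrors the `san=[...]` entry in the log above):

```shell
#!/bin/sh
# Illustrative openssl equivalent of minikube's server-cert generation.
set -e
dir=$(mktemp -d)
cd "$dir"

# CA key and self-signed CA cert (minikube reuses ca.pem / ca-key.pem).
openssl genrsa -out ca-key.pem 2048 2>/dev/null
openssl req -new -x509 -key ca-key.pem -out ca.pem -days 1 \
  -subj "/O=jenkins.multinode-144300"

# Server key and signing request.
openssl genrsa -out server-key.pem 2048 2>/dev/null
openssl req -new -key server-key.pem -out server.csr -subj "/CN=multinode-144300"

# Sign with the SAN list seen in the log.
printf 'subjectAltName=IP:172.29.70.202,DNS:localhost,IP:127.0.0.1,DNS:minikube,DNS:multinode-144300\n' > san.cnf
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out server.pem -days 1 -extfile san.cnf 2>/dev/null

openssl verify -CAfile ca.pem server.pem
```

The IP appears twice in the log's SAN list because minikube adds both the node IP and the API-server advertise address, which coincide on a single-NIC VM.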
	I0706 20:45:45.818008    1540 provision.go:172] copyRemoteCerts
	I0706 20:45:45.826001    1540 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0706 20:45:45.826001    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:46.520752    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:46.520752    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:46.520752    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:47.520059    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:45:47.520059    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:47.520553    1540 sshutil.go:53] new ssh client: &{IP:172.29.70.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 20:45:47.625050    1540 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.799036s)
	I0706 20:45:47.625106    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0706 20:45:47.625495    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0706 20:45:47.662740    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0706 20:45:47.663091    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0706 20:45:47.696469    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0706 20:45:47.696641    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0706 20:45:47.730295    1540 provision.go:86] duration metric: configureAuth took 5.2530262s
	I0706 20:45:47.730295    1540 buildroot.go:189] setting minikube options for container-runtime
	I0706 20:45:47.730496    1540 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:45:47.730496    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:48.421374    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:48.421374    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:48.421445    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:49.367657    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:45:49.367899    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:49.372103    1540 main.go:141] libmachine: Using SSH client type: native
	I0706 20:45:49.373008    1540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.70.202 22 <nil> <nil>}
	I0706 20:45:49.373008    1540 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0706 20:45:49.515095    1540 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0706 20:45:49.515095    1540 buildroot.go:70] root file system type: tmpfs
	I0706 20:45:49.515095    1540 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0706 20:45:49.515095    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:50.173528    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:50.173694    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:50.173694    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:51.136689    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:45:51.136689    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:51.140589    1540 main.go:141] libmachine: Using SSH client type: native
	I0706 20:45:51.141908    1540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.70.202 22 <nil> <nil>}
	I0706 20:45:51.142094    1540 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0706 20:45:51.300367    1540 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0706 20:45:51.300611    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:51.971188    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:51.971348    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:51.971348    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:52.934425    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:45:52.934597    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:52.939706    1540 main.go:141] libmachine: Using SSH client type: native
	I0706 20:45:52.940666    1540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.70.202 22 <nil> <nil>}
	I0706 20:45:52.940666    1540 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0706 20:45:54.407754    1540 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0706 20:45:54.407754    1540 machine.go:91] provisioned docker machine in 15.553742s
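The `sudo diff -u ... || { mv ...; systemctl ... }` one-liner at 20:45:52 is an install-if-changed pattern: the staged unit replaces the live one (and docker restarts) only when the two differ, and `diff` failing on a missing target, as in the "can't stat" output above, also takes the install branch. A self-contained sketch on temp files, with the systemctl calls stubbed out so it runs without systemd (assumption: file names below are placeholders):

```shell
#!/bin/sh
# Install-if-changed: replace the target only when the staged copy differs.
dir=$(mktemp -d)
target="$dir/docker.service"
staged="$dir/docker.service.new"
printf '[Unit]\nDescription=Docker Application Container Engine\n' > "$staged"

reloaded=no
if ! diff -u "$target" "$staged" 2>/dev/null; then
  # First run: target is missing, diff exits non-zero, so we install.
  mv "$staged" "$target"
  reloaded=yes    # real code: systemctl daemon-reload && systemctl restart docker
fi
echo "reloaded=$reloaded"
```

On an unchanged re-provision, `diff` exits zero and the restart is skipped, which is why repeated `minikube start` runs do not bounce the docker daemon.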
	I0706 20:45:54.407754    1540 client.go:171] LocalClient.Create took 55.3477525s
	I0706 20:45:54.407754    1540 start.go:167] duration metric: libmachine.API.Create for "multinode-144300" took 55.3483346s
	I0706 20:45:54.407754    1540 start.go:300] post-start starting for "multinode-144300" (driver="hyperv")
	I0706 20:45:54.407754    1540 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0706 20:45:54.419805    1540 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0706 20:45:54.419805    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:55.094335    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:55.094335    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:55.094434    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:56.008559    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:45:56.008794    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:56.009332    1540 sshutil.go:53] new ssh client: &{IP:172.29.70.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 20:45:56.114795    1540 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.6949382s)
	I0706 20:45:56.124495    1540 ssh_runner.go:195] Run: cat /etc/os-release
	I0706 20:45:56.130015    1540 command_runner.go:130] > NAME=Buildroot
	I0706 20:45:56.130015    1540 command_runner.go:130] > VERSION=2021.02.12-1-g6f2898e-dirty
	I0706 20:45:56.130015    1540 command_runner.go:130] > ID=buildroot
	I0706 20:45:56.130015    1540 command_runner.go:130] > VERSION_ID=2021.02.12
	I0706 20:45:56.130015    1540 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0706 20:45:56.130015    1540 info.go:137] Remote host: Buildroot 2021.02.12
	I0706 20:45:56.130015    1540 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0706 20:45:56.130762    1540 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0706 20:45:56.131559    1540 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> 82562.pem in /etc/ssl/certs
	I0706 20:45:56.131559    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> /etc/ssl/certs/82562.pem
	I0706 20:45:56.141080    1540 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0706 20:45:56.155027    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /etc/ssl/certs/82562.pem (1708 bytes)
	I0706 20:45:56.192552    1540 start.go:303] post-start completed in 1.7847848s
	I0706 20:45:56.195645    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:56.858514    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:56.858514    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:56.858514    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:57.804454    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:45:57.804524    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:57.804849    1540 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\config.json ...
	I0706 20:45:57.806573    1540 start.go:128] duration metric: createHost completed in 58.7511604s
	I0706 20:45:57.806573    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:45:58.468761    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:45:58.468761    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:58.468761    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:45:59.461031    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:45:59.461031    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:45:59.465217    1540 main.go:141] libmachine: Using SSH client type: native
	I0706 20:45:59.465588    1540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.70.202 22 <nil> <nil>}
	I0706 20:45:59.466172    1540 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0706 20:45:59.604795    1540 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688676359.605516743
	
	I0706 20:45:59.604905    1540 fix.go:206] guest clock: 1688676359.605516743
	I0706 20:45:59.604905    1540 fix.go:219] Guest: 2023-07-06 20:45:59.605516743 +0000 UTC Remote: 2023-07-06 20:45:57.8065733 +0000 UTC m=+60.384922201 (delta=1.798943443s)
	I0706 20:45:59.604977    1540 fix.go:190] guest clock delta is within tolerance: 1.798943443s
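	The fix.go lines above compute the guest-clock delta by running `date +%s.%N` inside the VM over SSH and comparing against the host clock. A minimal local sketch of that check (both timestamps are taken on the same machine here, so the delta is near zero; the 2s tolerance value is an assumption for illustration, chosen because this run's 1.798943443s delta passed):

```shell
# Sketch of the guest-clock delta check. In the real flow, guest_ts comes
# from running `date +%s.%N` inside the VM over SSH; here both are local.
host_ts=$(date +%s.%N)
guest_ts=$(date +%s.%N)
delta=$(awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN { d = g - h; if (d < 0) d = -d; printf "%.9f", d }')
# Treat deltas under an assumed 2s tolerance as acceptable.
awk -v d="$delta" 'BEGIN { exit !(d < 2) }' && echo "guest clock delta ${delta}s within tolerance"
```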
	I0706 20:45:59.604977    1540 start.go:83] releasing machines lock for "multinode-144300", held for 1m0.5498526s
	I0706 20:45:59.605258    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:46:00.265688    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:46:00.265688    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:46:00.265688    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:46:01.198260    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:46:01.198260    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:46:01.201614    1540 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0706 20:46:01.201872    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:46:01.208545    1540 ssh_runner.go:195] Run: cat /version.json
	I0706 20:46:01.208545    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:46:01.927231    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:46:01.927231    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:46:01.927231    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:46:01.927231    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:46:01.927231    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:46:01.927231    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:46:02.983387    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:46:02.983650    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:46:02.983718    1540 sshutil.go:53] new ssh client: &{IP:172.29.70.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 20:46:03.002814    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:46:03.002814    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:46:03.002814    1540 sshutil.go:53] new ssh client: &{IP:172.29.70.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 20:46:03.091617    1540 command_runner.go:130] > {"iso_version": "v1.30.1-1688144767-16765", "kicbase_version": "v0.0.39-1687538068-16731", "minikube_version": "v1.30.1", "commit": "ea1fcc3c7b384862404a5ec9a04bec1496959f9b"}
	I0706 20:46:03.091910    1540 ssh_runner.go:235] Completed: cat /version.json: (1.8833511s)
	I0706 20:46:03.103408    1540 ssh_runner.go:195] Run: systemctl --version
	I0706 20:46:03.173205    1540 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0706 20:46:03.173262    1540 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (1.9714698s)
	I0706 20:46:03.173378    1540 command_runner.go:130] > systemd 247 (247)
	I0706 20:46:03.173406    1540 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0706 20:46:03.182759    1540 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0706 20:46:03.189427    1540 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0706 20:46:03.190288    1540 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0706 20:46:03.199285    1540 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0706 20:46:03.220371    1540 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0706 20:46:03.220371    1540 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
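	The find/mv invocation logged above renames matching bridge and podman CNI configs out of the way so they stop loading. A runnable sketch of the same rename against a scratch directory (file names modeled on the one disabled in this run; no sudo needed):

```shell
# Disable bridge/podman CNI configs by renaming them to *.mk_disabled,
# exercised against a temp directory instead of /etc/cni/net.d.
dir=$(mktemp -d)
touch "$dir/87-podman-bridge.conflist" "$dir/10-kindnet.conflist"
find "$dir" -maxdepth 1 -type f \( \( -name '*bridge*' -or -name '*podman*' \) \
    -and -not -name '*.mk_disabled' \) \
    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$dir"
```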
	I0706 20:46:03.220371    1540 start.go:466] detecting cgroup driver to use...
	I0706 20:46:03.220371    1540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 20:46:03.244704    1540 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0706 20:46:03.254685    1540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0706 20:46:03.277353    1540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0706 20:46:03.292387    1540 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0706 20:46:03.300554    1540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0706 20:46:03.324738    1540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 20:46:03.348503    1540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0706 20:46:03.372103    1540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 20:46:03.396892    1540 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0706 20:46:03.418508    1540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
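	The run of sed commands above rewrites /etc/containerd/config.toml for the cgroupfs driver and the runc v2 runtime. The same edits applied to a minimal sample config (the sample's contents are illustrative, not the VM's real file):

```shell
# Apply the containerd config edits from the log to a sample config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
  sandbox_image = "registry.k8s.io/pause:3.6"
  SystemdCgroup = true
  runtime_type = "io.containerd.runtime.v1.linux"
  conf_dir = "/etc/cni/net.mk"
EOF
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"
cat "$cfg"
```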
	I0706 20:46:03.443303    1540 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0706 20:46:03.457513    1540 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0706 20:46:03.467094    1540 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0706 20:46:03.491996    1540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:46:03.649551    1540 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0706 20:46:03.672155    1540 start.go:466] detecting cgroup driver to use...
	I0706 20:46:03.680568    1540 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0706 20:46:03.700730    1540 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0706 20:46:03.700730    1540 command_runner.go:130] > [Unit]
	I0706 20:46:03.700730    1540 command_runner.go:130] > Description=Docker Application Container Engine
	I0706 20:46:03.700730    1540 command_runner.go:130] > Documentation=https://docs.docker.com
	I0706 20:46:03.700730    1540 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0706 20:46:03.700730    1540 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0706 20:46:03.700730    1540 command_runner.go:130] > StartLimitBurst=3
	I0706 20:46:03.700730    1540 command_runner.go:130] > StartLimitIntervalSec=60
	I0706 20:46:03.700730    1540 command_runner.go:130] > [Service]
	I0706 20:46:03.700730    1540 command_runner.go:130] > Type=notify
	I0706 20:46:03.700730    1540 command_runner.go:130] > Restart=on-failure
	I0706 20:46:03.700730    1540 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0706 20:46:03.700730    1540 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0706 20:46:03.700730    1540 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0706 20:46:03.700730    1540 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0706 20:46:03.700730    1540 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0706 20:46:03.700730    1540 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0706 20:46:03.700730    1540 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0706 20:46:03.700730    1540 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0706 20:46:03.700730    1540 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0706 20:46:03.700730    1540 command_runner.go:130] > ExecStart=
	I0706 20:46:03.700730    1540 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0706 20:46:03.700730    1540 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0706 20:46:03.700730    1540 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0706 20:46:03.700730    1540 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0706 20:46:03.700730    1540 command_runner.go:130] > LimitNOFILE=infinity
	I0706 20:46:03.700730    1540 command_runner.go:130] > LimitNPROC=infinity
	I0706 20:46:03.700730    1540 command_runner.go:130] > LimitCORE=infinity
	I0706 20:46:03.700730    1540 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0706 20:46:03.700730    1540 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0706 20:46:03.701263    1540 command_runner.go:130] > TasksMax=infinity
	I0706 20:46:03.701263    1540 command_runner.go:130] > TimeoutStartSec=0
	I0706 20:46:03.701263    1540 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0706 20:46:03.701263    1540 command_runner.go:130] > Delegate=yes
	I0706 20:46:03.701263    1540 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0706 20:46:03.701263    1540 command_runner.go:130] > KillMode=process
	I0706 20:46:03.701381    1540 command_runner.go:130] > [Install]
	I0706 20:46:03.701381    1540 command_runner.go:130] > WantedBy=multi-user.target
	I0706 20:46:03.709340    1540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 20:46:03.732766    1540 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0706 20:46:03.768003    1540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 20:46:03.792254    1540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 20:46:03.817126    1540 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0706 20:46:03.874251    1540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 20:46:03.891726    1540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 20:46:03.917284    1540 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0706 20:46:03.926358    1540 ssh_runner.go:195] Run: which cri-dockerd
	I0706 20:46:03.933988    1540 command_runner.go:130] > /usr/bin/cri-dockerd
	I0706 20:46:03.944371    1540 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0706 20:46:03.965248    1540 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0706 20:46:04.001280    1540 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0706 20:46:04.141511    1540 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0706 20:46:04.265495    1540 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0706 20:46:04.265495    1540 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0706 20:46:04.304005    1540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:46:04.439664    1540 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 20:46:05.918905    1540 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4792297s)
	I0706 20:46:05.927791    1540 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 20:46:06.074803    1540 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0706 20:46:06.230806    1540 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 20:46:06.370818    1540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:46:06.508979    1540 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0706 20:46:06.538629    1540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:46:06.673952    1540 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0706 20:46:06.755427    1540 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0706 20:46:06.765164    1540 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0706 20:46:06.772377    1540 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0706 20:46:06.772377    1540 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0706 20:46:06.772377    1540 command_runner.go:130] > Device: 16h/22d	Inode: 905         Links: 1
	I0706 20:46:06.772377    1540 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0706 20:46:06.772377    1540 command_runner.go:130] > Access: 2023-07-06 20:46:06.691804098 +0000
	I0706 20:46:06.772377    1540 command_runner.go:130] > Modify: 2023-07-06 20:46:06.691804098 +0000
	I0706 20:46:06.772377    1540 command_runner.go:130] > Change: 2023-07-06 20:46:06.695804329 +0000
	I0706 20:46:06.772377    1540 command_runner.go:130] >  Birth: -
	I0706 20:46:06.772377    1540 start.go:534] Will wait 60s for crictl version
	I0706 20:46:06.783311    1540 ssh_runner.go:195] Run: which crictl
	I0706 20:46:06.788188    1540 command_runner.go:130] > /usr/bin/crictl
	I0706 20:46:06.795636    1540 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0706 20:46:06.841732    1540 command_runner.go:130] > Version:  0.1.0
	I0706 20:46:06.841732    1540 command_runner.go:130] > RuntimeName:  docker
	I0706 20:46:06.841732    1540 command_runner.go:130] > RuntimeVersion:  24.0.2
	I0706 20:46:06.841732    1540 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0706 20:46:06.841732    1540 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0706 20:46:06.847740    1540 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 20:46:06.875474    1540 command_runner.go:130] > 24.0.2
	I0706 20:46:06.881993    1540 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 20:46:06.908854    1540 command_runner.go:130] > 24.0.2
	I0706 20:46:06.916599    1540 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0706 20:46:06.916599    1540 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0706 20:46:06.922014    1540 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0706 20:46:06.922014    1540 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0706 20:46:06.922014    1540 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0706 20:46:06.922014    1540 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:76:79 Flags:up|broadcast|multicast|running}
	I0706 20:46:06.925064    1540 ip.go:210] interface addr: fe80::9492:57c6:5513:d3cc/64
	I0706 20:46:06.925064    1540 ip.go:210] interface addr: 172.29.64.1/20
	I0706 20:46:06.931926    1540 ssh_runner.go:195] Run: grep 172.29.64.1	host.minikube.internal$ /etc/hosts
	I0706 20:46:06.936890    1540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
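	The grep check and the bash one-liner above implement an idempotent hosts-file update: strip any stale host.minikube.internal line, then append the current mapping. A sketch against a throwaway copy (no sudo; the gateway IP 172.29.64.1 is taken from this run):

```shell
# Idempotent update of a hosts file: remove any existing entry for the
# name, then append the current mapping. Uses a temp copy of /etc/hosts.
hosts=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n172.29.64.1\thost.minikube.internal\n' > "$hosts"
{ grep -v "${tab}host.minikube.internal\$" "$hosts"
  printf '172.29.64.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Running the block twice leaves exactly one entry, which is the point of the grep-then-append idiom.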
	I0706 20:46:06.957013    1540 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 20:46:06.963260    1540 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0706 20:46:06.985386    1540 docker.go:636] Got preloaded images: 
	I0706 20:46:06.985386    1540 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0706 20:46:06.994577    1540 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0706 20:46:07.008026    1540 command_runner.go:139] > {"Repositories":{}}
	I0706 20:46:07.016743    1540 ssh_runner.go:195] Run: which lz4
	I0706 20:46:07.021294    1540 command_runner.go:130] > /usr/bin/lz4
	I0706 20:46:07.021912    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0706 20:46:07.030369    1540 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0706 20:46:07.034987    1540 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0706 20:46:07.035230    1540 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0706 20:46:07.035361    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (412285949 bytes)
	I0706 20:46:09.207159    1540 docker.go:600] Took 2.184907 seconds to copy over tarball
	I0706 20:46:09.216823    1540 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0706 20:46:18.445318    1540 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.2283693s)
	I0706 20:46:18.445373    1540 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0706 20:46:18.501479    1540 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0706 20:46:18.515390    1540 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.10.1":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e":"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.7-0":"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83":"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.27.3":"sha256:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","registry.k8s.io/kube-apiserver@sha256:fd03335dd2e7163e5e36e933a0c735d7fec6f42b33ddafad0bc54f333e4a23c0":"sha256:08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.27.3":"sha256:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","registry.k8s.io/kube-controller-manager@sha256:1ad8df2b525e7270cbad6fd613c4f668e336edb4436f440e49b34c4cec4fac9e":"sha256:7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.27.3":"sha256:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","registry.k8s.io/kube-proxy@sha256:fb2bd59aae959e9649cb34101b66bb3c65f61eee9f3f81e40ed1e2325c92e699":"sha256:5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.27.3":"sha256:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","registry.k8s.io/kube-scheduler@sha256:77b8db7564e395328905beb74a0b9a5db3218a4b16ec19af174957e518df40c8":"sha256:41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0706 20:46:18.515443    1540 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0706 20:46:18.548944    1540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:46:18.696517    1540 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 20:46:21.008176    1540 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.3116069s)
	I0706 20:46:21.016509    1540 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0706 20:46:21.038857    1540 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.3
	I0706 20:46:21.039672    1540 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.3
	I0706 20:46:21.039672    1540 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.3
	I0706 20:46:21.039787    1540 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.3
	I0706 20:46:21.039787    1540 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0706 20:46:21.039787    1540 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0706 20:46:21.039787    1540 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0706 20:46:21.039787    1540 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0706 20:46:21.039787    1540 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0706 20:46:21.039787    1540 cache_images.go:84] Images are preloaded, skipping loading
	I0706 20:46:21.049210    1540 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0706 20:46:21.083419    1540 command_runner.go:130] > cgroupfs
	I0706 20:46:21.084333    1540 cni.go:84] Creating CNI manager for ""
	I0706 20:46:21.084333    1540 cni.go:137] 1 nodes found, recommending kindnet
	I0706 20:46:21.084333    1540 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0706 20:46:21.084333    1540 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.70.202 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-144300 NodeName:multinode-144300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.70.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.70.202 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0706 20:46:21.084333    1540 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.70.202
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-144300"
	  kubeletExtraArgs:
	    node-ip: 172.29.70.202
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.70.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0706 20:46:21.084333    1540 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-144300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.70.202
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-144300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
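	The kubelet unit drop-in logged above is later scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Written out as a complete file for reference (the scratch location below is an assumption; the ExecStart flags mirror the log verbatim):

```shell
# Reconstruct the kubelet systemd drop-in from the logged fragment into a
# scratch file, then show the double-ExecStart pattern that resets the
# inherited command before setting the real one.
dropin=$(mktemp)
cat > "$dropin" <<'EOF'
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-144300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.70.202

[Install]
EOF
grep -c '^ExecStart=' "$dropin"
```

The empty `ExecStart=` line clears any command inherited from the base unit, the same technique the docker.service drop-in above uses.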
	I0706 20:46:21.094434    1540 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0706 20:46:21.108290    1540 command_runner.go:130] > kubeadm
	I0706 20:46:21.108369    1540 command_runner.go:130] > kubectl
	I0706 20:46:21.108369    1540 command_runner.go:130] > kubelet
	I0706 20:46:21.108489    1540 binaries.go:44] Found k8s binaries, skipping transfer
	I0706 20:46:21.120945    1540 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0706 20:46:21.133510    1540 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0706 20:46:21.156566    1540 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0706 20:46:21.179905    1540 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0706 20:46:21.215615    1540 ssh_runner.go:195] Run: grep 172.29.70.202	control-plane.minikube.internal$ /etc/hosts
	I0706 20:46:21.220523    1540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.70.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0706 20:46:21.240828    1540 certs.go:56] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300 for IP: 172.29.70.202
	I0706 20:46:21.240881    1540 certs.go:190] acquiring lock for shared ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:46:21.241302    1540 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0706 20:46:21.242062    1540 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0706 20:46:21.242964    1540 certs.go:319] generating minikube-user signed cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\client.key
	I0706 20:46:21.243168    1540 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\client.crt with IP's: []
	I0706 20:46:21.546492    1540 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\client.crt ...
	I0706 20:46:21.547201    1540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\client.crt: {Name:mkde0a723e95d63f844fe435b99d4b47a5e55d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:46:21.548537    1540 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\client.key ...
	I0706 20:46:21.548537    1540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\client.key: {Name:mk31fd9676bed65c61de378a2a88f88bd899061d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:46:21.549891    1540 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.key.6618c10a
	I0706 20:46:21.550116    1540 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.crt.6618c10a with IP's: [172.29.70.202 10.96.0.1 127.0.0.1 10.0.0.1]
	I0706 20:46:21.663883    1540 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.crt.6618c10a ...
	I0706 20:46:21.663883    1540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.crt.6618c10a: {Name:mk79a6f6bf1cf85727c4745faad7d6130d6d9cb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:46:21.664943    1540 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.key.6618c10a ...
	I0706 20:46:21.665927    1540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.key.6618c10a: {Name:mkbcdbc4a87c96820ebf8e7941d12e160f2b5e6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:46:21.665927    1540 certs.go:337] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.crt.6618c10a -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.crt
	I0706 20:46:21.677899    1540 certs.go:341] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.key.6618c10a -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.key
	I0706 20:46:21.678897    1540 certs.go:319] generating aggregator signed cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.key
	I0706 20:46:21.678897    1540 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.crt with IP's: []
	I0706 20:46:21.909648    1540 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.crt ...
	I0706 20:46:21.909648    1540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.crt: {Name:mkfc8a78f1cb3ad4df353c5281c2d0b2e53ca8c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:46:21.910641    1540 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.key ...
	I0706 20:46:21.910641    1540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.key: {Name:mk3e8828a181f23026fa1bb3ac61f3a84288bf44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:46:21.911800    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0706 20:46:21.912829    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0706 20:46:21.912829    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0706 20:46:21.921623    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0706 20:46:21.921623    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0706 20:46:21.921623    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0706 20:46:21.921623    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0706 20:46:21.921623    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0706 20:46:21.922806    1540 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem (1338 bytes)
	W0706 20:46:21.923374    1540 certs.go:433] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256_empty.pem, impossibly tiny 0 bytes
	I0706 20:46:21.923414    1540 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0706 20:46:21.923644    1540 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0706 20:46:21.923930    1540 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0706 20:46:21.924173    1540 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0706 20:46:21.924404    1540 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem (1708 bytes)
	I0706 20:46:21.924946    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem -> /usr/share/ca-certificates/8256.pem
	I0706 20:46:21.925056    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> /usr/share/ca-certificates/82562.pem
	I0706 20:46:21.925187    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:46:21.925337    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0706 20:46:21.966593    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0706 20:46:21.999401    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0706 20:46:22.032262    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0706 20:46:22.070575    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0706 20:46:22.103333    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0706 20:46:22.138622    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0706 20:46:22.169593    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0706 20:46:22.203204    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem --> /usr/share/ca-certificates/8256.pem (1338 bytes)
	I0706 20:46:22.240316    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /usr/share/ca-certificates/82562.pem (1708 bytes)
	I0706 20:46:22.277934    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0706 20:46:22.310778    1540 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0706 20:46:22.349788    1540 ssh_runner.go:195] Run: openssl version
	I0706 20:46:22.357307    1540 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0706 20:46:22.365658    1540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0706 20:46:22.389520    1540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:46:22.395623    1540 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  6 20:05 /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:46:22.395697    1540 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  6 20:05 /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:46:22.403431    1540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:46:22.410792    1540 command_runner.go:130] > b5213941
	I0706 20:46:22.421736    1540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0706 20:46:22.443891    1540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8256.pem && ln -fs /usr/share/ca-certificates/8256.pem /etc/ssl/certs/8256.pem"
	I0706 20:46:22.466494    1540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8256.pem
	I0706 20:46:22.472629    1540 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  6 20:14 /usr/share/ca-certificates/8256.pem
	I0706 20:46:22.472629    1540 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul  6 20:14 /usr/share/ca-certificates/8256.pem
	I0706 20:46:22.480810    1540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8256.pem
	I0706 20:46:22.489147    1540 command_runner.go:130] > 51391683
	I0706 20:46:22.498045    1540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8256.pem /etc/ssl/certs/51391683.0"
	I0706 20:46:22.520870    1540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82562.pem && ln -fs /usr/share/ca-certificates/82562.pem /etc/ssl/certs/82562.pem"
	I0706 20:46:22.547019    1540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82562.pem
	I0706 20:46:22.553781    1540 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  6 20:14 /usr/share/ca-certificates/82562.pem
	I0706 20:46:22.553904    1540 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul  6 20:14 /usr/share/ca-certificates/82562.pem
	I0706 20:46:22.563693    1540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82562.pem
	I0706 20:46:22.570255    1540 command_runner.go:130] > 3ec20f2e
	I0706 20:46:22.578300    1540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/82562.pem /etc/ssl/certs/3ec20f2e.0"
	I0706 20:46:22.601128    1540 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0706 20:46:22.605987    1540 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0706 20:46:22.606449    1540 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0706 20:46:22.606975    1540 kubeadm.go:404] StartCluster: {Name:multinode-144300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-144300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.70.202 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 20:46:22.613209    1540 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0706 20:46:22.645069    1540 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0706 20:46:22.658488    1540 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0706 20:46:22.658534    1540 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0706 20:46:22.658560    1540 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0706 20:46:22.667469    1540 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0706 20:46:22.690350    1540 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0706 20:46:22.702377    1540 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0706 20:46:22.702377    1540 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0706 20:46:22.703432    1540 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0706 20:46:22.703464    1540 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0706 20:46:22.703685    1540 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0706 20:46:22.703879    1540 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0706 20:46:23.328300    1540 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0706 20:46:23.328300    1540 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0706 20:46:35.973665    1540 command_runner.go:130] > [init] Using Kubernetes version: v1.27.3
	I0706 20:46:35.973665    1540 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0706 20:46:35.973833    1540 command_runner.go:130] > [preflight] Running pre-flight checks
	I0706 20:46:35.973833    1540 kubeadm.go:322] [preflight] Running pre-flight checks
	I0706 20:46:35.974082    1540 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0706 20:46:35.974082    1540 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0706 20:46:35.974327    1540 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0706 20:46:35.974389    1540 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0706 20:46:35.974575    1540 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0706 20:46:35.974628    1540 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0706 20:46:35.974792    1540 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0706 20:46:35.978090    1540 out.go:204]   - Generating certificates and keys ...
	I0706 20:46:35.974845    1540 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0706 20:46:35.978238    1540 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0706 20:46:35.978361    1540 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0706 20:46:35.978656    1540 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0706 20:46:35.978656    1540 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0706 20:46:35.978953    1540 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0706 20:46:35.979092    1540 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0706 20:46:35.979092    1540 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0706 20:46:35.979092    1540 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0706 20:46:35.979260    1540 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0706 20:46:35.979260    1540 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0706 20:46:35.979436    1540 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0706 20:46:35.979436    1540 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0706 20:46:35.979667    1540 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0706 20:46:35.979667    1540 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0706 20:46:35.979913    1540 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-144300] and IPs [172.29.70.202 127.0.0.1 ::1]
	I0706 20:46:35.979913    1540 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-144300] and IPs [172.29.70.202 127.0.0.1 ::1]
	I0706 20:46:35.980022    1540 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0706 20:46:35.980129    1540 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0706 20:46:35.980254    1540 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-144300] and IPs [172.29.70.202 127.0.0.1 ::1]
	I0706 20:46:35.980340    1540 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-144300] and IPs [172.29.70.202 127.0.0.1 ::1]
	I0706 20:46:35.980419    1540 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0706 20:46:35.980419    1540 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0706 20:46:35.980685    1540 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0706 20:46:35.980685    1540 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0706 20:46:35.980785    1540 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0706 20:46:35.980785    1540 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0706 20:46:35.980898    1540 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0706 20:46:35.980957    1540 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0706 20:46:35.981142    1540 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0706 20:46:35.981174    1540 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0706 20:46:35.981366    1540 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0706 20:46:35.981366    1540 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0706 20:46:35.981590    1540 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0706 20:46:35.981590    1540 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0706 20:46:35.981808    1540 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0706 20:46:35.981808    1540 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0706 20:46:35.982095    1540 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0706 20:46:35.982180    1540 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0706 20:46:35.982505    1540 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0706 20:46:35.982505    1540 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0706 20:46:35.982642    1540 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0706 20:46:35.982698    1540 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0706 20:46:35.982880    1540 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0706 20:46:35.982963    1540 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0706 20:46:35.985429    1540 out.go:204]   - Booting up control plane ...
	I0706 20:46:35.985657    1540 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0706 20:46:35.985716    1540 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0706 20:46:35.985966    1540 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0706 20:46:35.986010    1540 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0706 20:46:35.986162    1540 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0706 20:46:35.986218    1540 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0706 20:46:35.986561    1540 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0706 20:46:35.986561    1540 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0706 20:46:35.986961    1540 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0706 20:46:35.986961    1540 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0706 20:46:35.987278    1540 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.505857 seconds
	I0706 20:46:35.987278    1540 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.505857 seconds
	I0706 20:46:35.987609    1540 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0706 20:46:35.987680    1540 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0706 20:46:35.987996    1540 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0706 20:46:35.988049    1540 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0706 20:46:35.988203    1540 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0706 20:46:35.988256    1540 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0706 20:46:35.988645    1540 kubeadm.go:322] [mark-control-plane] Marking the node multinode-144300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0706 20:46:35.988645    1540 command_runner.go:130] > [mark-control-plane] Marking the node multinode-144300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0706 20:46:35.988737    1540 command_runner.go:130] > [bootstrap-token] Using token: qvomz1.0symwxkiok30st95
	I0706 20:46:35.988817    1540 kubeadm.go:322] [bootstrap-token] Using token: qvomz1.0symwxkiok30st95
	I0706 20:46:35.991669    1540 out.go:204]   - Configuring RBAC rules ...
	I0706 20:46:35.991669    1540 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0706 20:46:35.991669    1540 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0706 20:46:35.991669    1540 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0706 20:46:35.991669    1540 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0706 20:46:35.991669    1540 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0706 20:46:35.991669    1540 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0706 20:46:35.992820    1540 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0706 20:46:35.992873    1540 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0706 20:46:35.993143    1540 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0706 20:46:35.993143    1540 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0706 20:46:35.993355    1540 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0706 20:46:35.993407    1540 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0706 20:46:35.993669    1540 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0706 20:46:35.993722    1540 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0706 20:46:35.993830    1540 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0706 20:46:35.993830    1540 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0706 20:46:35.993940    1540 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0706 20:46:35.993995    1540 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0706 20:46:35.994047    1540 kubeadm.go:322] 
	I0706 20:46:35.994151    1540 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0706 20:46:35.994203    1540 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0706 20:46:35.994203    1540 kubeadm.go:322] 
	I0706 20:46:35.994419    1540 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0706 20:46:35.994419    1540 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0706 20:46:35.994471    1540 kubeadm.go:322] 
	I0706 20:46:35.994526    1540 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0706 20:46:35.994526    1540 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0706 20:46:35.994639    1540 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0706 20:46:35.994639    1540 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0706 20:46:35.994691    1540 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0706 20:46:35.994691    1540 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0706 20:46:35.994841    1540 kubeadm.go:322] 
	I0706 20:46:35.995009    1540 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0706 20:46:35.995009    1540 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0706 20:46:35.995064    1540 kubeadm.go:322] 
	I0706 20:46:35.995223    1540 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0706 20:46:35.995223    1540 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0706 20:46:35.995223    1540 kubeadm.go:322] 
	I0706 20:46:35.995439    1540 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0706 20:46:35.995439    1540 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0706 20:46:35.995653    1540 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0706 20:46:35.995653    1540 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0706 20:46:35.995653    1540 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0706 20:46:35.995653    1540 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0706 20:46:35.995653    1540 kubeadm.go:322] 
	I0706 20:46:35.995653    1540 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0706 20:46:35.995653    1540 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0706 20:46:35.996186    1540 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0706 20:46:35.996222    1540 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0706 20:46:35.996322    1540 kubeadm.go:322] 
	I0706 20:46:35.996322    1540 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token qvomz1.0symwxkiok30st95 \
	I0706 20:46:35.996322    1540 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token qvomz1.0symwxkiok30st95 \
	I0706 20:46:35.996322    1540 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:801bc493e18c108e97245d7c598f2964e6d9529fbb4b1b58fd14b2c4e1eaad5d \
	I0706 20:46:35.996322    1540 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:801bc493e18c108e97245d7c598f2964e6d9529fbb4b1b58fd14b2c4e1eaad5d \
	I0706 20:46:35.996322    1540 kubeadm.go:322] 	--control-plane 
	I0706 20:46:35.996322    1540 command_runner.go:130] > 	--control-plane 
	I0706 20:46:35.996322    1540 kubeadm.go:322] 
	I0706 20:46:35.997087    1540 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0706 20:46:35.997087    1540 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0706 20:46:35.997087    1540 kubeadm.go:322] 
	I0706 20:46:35.997087    1540 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token qvomz1.0symwxkiok30st95 \
	I0706 20:46:35.997087    1540 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token qvomz1.0symwxkiok30st95 \
	I0706 20:46:35.997852    1540 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:801bc493e18c108e97245d7c598f2964e6d9529fbb4b1b58fd14b2c4e1eaad5d 
	I0706 20:46:35.997890    1540 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:801bc493e18c108e97245d7c598f2964e6d9529fbb4b1b58fd14b2c4e1eaad5d 
	I0706 20:46:35.997890    1540 cni.go:84] Creating CNI manager for ""
	I0706 20:46:35.997890    1540 cni.go:137] 1 nodes found, recommending kindnet
	I0706 20:46:36.000233    1540 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0706 20:46:36.013136    1540 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0706 20:46:36.019876    1540 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0706 20:46:36.019876    1540 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0706 20:46:36.019931    1540 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0706 20:46:36.019931    1540 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0706 20:46:36.019931    1540 command_runner.go:130] > Access: 2023-07-06 20:45:37.832339800 +0000
	I0706 20:46:36.019931    1540 command_runner.go:130] > Modify: 2023-06-30 22:28:30.000000000 +0000
	I0706 20:46:36.019985    1540 command_runner.go:130] > Change: 2023-07-06 20:45:29.423000000 +0000
	I0706 20:46:36.019985    1540 command_runner.go:130] >  Birth: -
	I0706 20:46:36.020667    1540 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0706 20:46:36.020667    1540 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0706 20:46:36.080336    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0706 20:46:37.534339    1540 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0706 20:46:37.536481    1540 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0706 20:46:37.536580    1540 command_runner.go:130] > serviceaccount/kindnet created
	I0706 20:46:37.536580    1540 command_runner.go:130] > daemonset.apps/kindnet created
	I0706 20:46:37.536580    1540 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.4562329s)
	I0706 20:46:37.536707    1540 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0706 20:46:37.546573    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=4d384f293eb4d1ae13e8a16440afa4ec48ef3148 minikube.k8s.io/name=multinode-144300 minikube.k8s.io/updated_at=2023_07_06T20_46_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:37.547555    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:37.563534    1540 command_runner.go:130] > -16
	I0706 20:46:37.563534    1540 ops.go:34] apiserver oom_adj: -16
	I0706 20:46:37.723592    1540 command_runner.go:130] > node/multinode-144300 labeled
	I0706 20:46:37.723651    1540 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0706 20:46:37.733333    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:37.849465    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:38.371245    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:38.475027    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:38.870244    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:38.976097    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:39.373886    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:39.466808    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:39.875267    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:39.986636    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:40.374315    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:40.493724    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:40.861872    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:40.964495    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:41.366342    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:41.479489    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:41.870670    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:41.976995    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:42.377048    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:42.475960    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:42.863513    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:42.959641    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:43.365657    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:43.467358    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:43.870318    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:43.975356    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:44.373136    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:44.487517    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:44.871552    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:44.986726    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:45.373209    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:45.483998    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:45.875071    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:45.977641    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:46.374712    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:46.469597    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:46.873469    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:46.981763    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:47.374719    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:47.488923    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:47.862494    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:47.955049    1540 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0706 20:46:48.368076    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0706 20:46:48.668964    1540 command_runner.go:130] > NAME      SECRETS   AGE
	I0706 20:46:48.669027    1540 command_runner.go:130] > default   0         0s
	I0706 20:46:48.674358    1540 kubeadm.go:1081] duration metric: took 11.1375416s to wait for elevateKubeSystemPrivileges.
	I0706 20:46:48.674482    1540 kubeadm.go:406] StartCluster complete in 26.0673094s
	I0706 20:46:48.674482    1540 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:46:48.674793    1540 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:46:48.676329    1540 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:46:48.677521    1540 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0706 20:46:48.677521    1540 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0706 20:46:48.677521    1540 addons.go:66] Setting storage-provisioner=true in profile "multinode-144300"
	I0706 20:46:48.677521    1540 addons.go:66] Setting default-storageclass=true in profile "multinode-144300"
	I0706 20:46:48.677521    1540 addons.go:228] Setting addon storage-provisioner=true in "multinode-144300"
	I0706 20:46:48.677521    1540 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-144300"
	I0706 20:46:48.678074    1540 host.go:66] Checking if "multinode-144300" exists ...
	I0706 20:46:48.678220    1540 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:46:48.678864    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:46:48.680124    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:46:48.697473    1540 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:46:48.698191    1540 kapi.go:59] client config for multinode-144300: &rest.Config{Host:"https://172.29.70.202:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 20:46:48.699536    1540 cert_rotation.go:137] Starting client certificate rotation controller
	I0706 20:46:48.700236    1540 round_trippers.go:463] GET https://172.29.70.202:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0706 20:46:48.700236    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:48.700236    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:48.700236    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:48.771319    1540 round_trippers.go:574] Response Status: 200 OK in 71 milliseconds
	I0706 20:46:48.771319    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:48.771319    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:48 GMT
	I0706 20:46:48.771319    1540 round_trippers.go:580]     Audit-Id: c5e5136c-4e71-45cf-81dc-ccabdb05d0a9
	I0706 20:46:48.771319    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:48.771319    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:48.771319    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:48.771319    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:48.771319    1540 round_trippers.go:580]     Content-Length: 291
	I0706 20:46:48.771319    1540 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b60ac185-d6d8-49e6-a9b3-f1d59eb7807f","resourceVersion":"351","creationTimestamp":"2023-07-06T20:46:35Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0706 20:46:48.772729    1540 request.go:1188] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b60ac185-d6d8-49e6-a9b3-f1d59eb7807f","resourceVersion":"351","creationTimestamp":"2023-07-06T20:46:35Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0706 20:46:48.772852    1540 round_trippers.go:463] PUT https://172.29.70.202:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0706 20:46:48.772986    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:48.773053    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:48.773173    1540 round_trippers.go:473]     Content-Type: application/json
	I0706 20:46:48.773173    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:48.825460    1540 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0706 20:46:48.825460    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:48.825460    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:48.825460    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:48.825460    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:48.825460    1540 round_trippers.go:580]     Content-Length: 291
	I0706 20:46:48.825460    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:48 GMT
	I0706 20:46:48.825460    1540 round_trippers.go:580]     Audit-Id: 4a9a5ad2-2613-4739-b3e4-99d172653e2a
	I0706 20:46:48.825460    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:48.825460    1540 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b60ac185-d6d8-49e6-a9b3-f1d59eb7807f","resourceVersion":"360","creationTimestamp":"2023-07-06T20:46:35Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0706 20:46:48.966928    1540 command_runner.go:130] > apiVersion: v1
	I0706 20:46:48.966989    1540 command_runner.go:130] > data:
	I0706 20:46:48.967064    1540 command_runner.go:130] >   Corefile: |
	I0706 20:46:48.967064    1540 command_runner.go:130] >     .:53 {
	I0706 20:46:48.967064    1540 command_runner.go:130] >         errors
	I0706 20:46:48.967122    1540 command_runner.go:130] >         health {
	I0706 20:46:48.967122    1540 command_runner.go:130] >            lameduck 5s
	I0706 20:46:48.967122    1540 command_runner.go:130] >         }
	I0706 20:46:48.967183    1540 command_runner.go:130] >         ready
	I0706 20:46:48.967183    1540 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0706 20:46:48.967249    1540 command_runner.go:130] >            pods insecure
	I0706 20:46:48.967305    1540 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0706 20:46:48.967383    1540 command_runner.go:130] >            ttl 30
	I0706 20:46:48.967439    1540 command_runner.go:130] >         }
	I0706 20:46:48.967439    1540 command_runner.go:130] >         prometheus :9153
	I0706 20:46:48.967517    1540 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0706 20:46:48.967517    1540 command_runner.go:130] >            max_concurrent 1000
	I0706 20:46:48.967589    1540 command_runner.go:130] >         }
	I0706 20:46:48.967640    1540 command_runner.go:130] >         cache 30
	I0706 20:46:48.967640    1540 command_runner.go:130] >         loop
	I0706 20:46:48.967777    1540 command_runner.go:130] >         reload
	I0706 20:46:48.967822    1540 command_runner.go:130] >         loadbalance
	I0706 20:46:48.967822    1540 command_runner.go:130] >     }
	I0706 20:46:48.967885    1540 command_runner.go:130] > kind: ConfigMap
	I0706 20:46:48.967948    1540 command_runner.go:130] > metadata:
	I0706 20:46:48.967948    1540 command_runner.go:130] >   creationTimestamp: "2023-07-06T20:46:35Z"
	I0706 20:46:48.967948    1540 command_runner.go:130] >   name: coredns
	I0706 20:46:48.967948    1540 command_runner.go:130] >   namespace: kube-system
	I0706 20:46:48.967948    1540 command_runner.go:130] >   resourceVersion: "260"
	I0706 20:46:48.967948    1540 command_runner.go:130] >   uid: d3d70a42-f7a8-414d-bd9e-06da3ba34172
	I0706 20:46:48.967948    1540 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.29.64.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0706 20:46:49.338184    1540 round_trippers.go:463] GET https://172.29.70.202:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0706 20:46:49.338184    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:49.338311    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:49.338311    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:49.343659    1540 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:46:49.343659    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:49.343961    1540 round_trippers.go:580]     Audit-Id: f93ed820-f37f-4371-9b76-e3bd5c3883a2
	I0706 20:46:49.343961    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:49.343961    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:49.343961    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:49.343961    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:49.343961    1540 round_trippers.go:580]     Content-Length: 291
	I0706 20:46:49.343961    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:49 GMT
	I0706 20:46:49.344038    1540 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b60ac185-d6d8-49e6-a9b3-f1d59eb7807f","resourceVersion":"399","creationTimestamp":"2023-07-06T20:46:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0706 20:46:49.344258    1540 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-144300" context rescaled to 1 replicas
	I0706 20:46:49.344258    1540 start.go:223] Will wait 6m0s for node &{Name: IP:172.29.70.202 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 20:46:49.349179    1540 out.go:177] * Verifying Kubernetes components...
	I0706 20:46:49.360856    1540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 20:46:49.461997    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:46:49.462060    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:46:49.462120    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:46:49.462060    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:46:49.467335    1540 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0706 20:46:49.463444    1540 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:46:49.469920    1540 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0706 20:46:49.469920    1540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0706 20:46:49.469920    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:46:49.469920    1540 kapi.go:59] client config for multinode-144300: &rest.Config{Host:"https://172.29.70.202:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 20:46:49.470771    1540 round_trippers.go:463] GET https://172.29.70.202:8443/apis/storage.k8s.io/v1/storageclasses
	I0706 20:46:49.470771    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:49.471337    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:49.471337    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:49.477898    1540 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0706 20:46:49.477954    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:49.477954    1540 round_trippers.go:580]     Content-Length: 109
	I0706 20:46:49.477954    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:49 GMT
	I0706 20:46:49.478023    1540 round_trippers.go:580]     Audit-Id: d3db6726-4968-48f3-abec-5eac2d0cb4c4
	I0706 20:46:49.478023    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:49.478112    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:49.478112    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:49.478112    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:49.478166    1540 request.go:1188] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"399"},"items":[]}
	I0706 20:46:49.478424    1540 addons.go:228] Setting addon default-storageclass=true in "multinode-144300"
	I0706 20:46:49.478554    1540 host.go:66] Checking if "multinode-144300" exists ...
	I0706 20:46:49.479914    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:46:49.869586    1540 command_runner.go:130] > configmap/coredns replaced
	I0706 20:46:49.869586    1540 start.go:901] {"host.minikube.internal": 172.29.64.1} host record injected into CoreDNS's ConfigMap
	I0706 20:46:49.870993    1540 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:46:49.871613    1540 kapi.go:59] client config for multinode-144300: &rest.Config{Host:"https://172.29.70.202:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 20:46:49.872214    1540 node_ready.go:35] waiting up to 6m0s for node "multinode-144300" to be "Ready" ...
	I0706 20:46:49.872214    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:49.872214    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:49.872754    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:49.872754    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:49.876611    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:46:49.876611    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:49.877039    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:49.877039    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:49.877039    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:49 GMT
	I0706 20:46:49.877039    1540 round_trippers.go:580]     Audit-Id: 0d1938a6-6606-4350-8d3e-ec5088f97fd1
	I0706 20:46:49.877118    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:49.877118    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:49.877369    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:50.237285    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:46:50.237508    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:46:50.237508    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:46:50.237565    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:46:50.237565    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:46:50.237760    1540 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0706 20:46:50.237804    1540 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0706 20:46:50.237861    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:46:50.380629    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:50.380739    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:50.380739    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:50.380739    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:50.384357    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:46:50.385269    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:50.385269    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:50.385269    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:50.385269    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:50.385269    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:50.385269    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:50 GMT
	I0706 20:46:50.385269    1540 round_trippers.go:580]     Audit-Id: dd0284b6-4456-4e0b-b955-60755657f8cf
	I0706 20:46:50.385740    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:50.890512    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:50.890592    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:50.890592    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:50.890592    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:50.894420    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:46:50.894905    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:50.894905    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:50.894905    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:50.894905    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:50.895036    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:50.895036    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:50 GMT
	I0706 20:46:50.895036    1540 round_trippers.go:580]     Audit-Id: 5fbee1f7-7b98-47d1-a4f6-b0a0e410c2bc
	I0706 20:46:50.895036    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:50.983955    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:46:50.984273    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:46:50.984273    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:46:51.300726    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:46:51.300726    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:46:51.301135    1540 sshutil.go:53] new ssh client: &{IP:172.29.70.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 20:46:51.379489    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:51.379558    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:51.379558    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:51.379558    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:51.383849    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:46:51.384319    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:51.384319    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:51.384319    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:51 GMT
	I0706 20:46:51.384417    1540 round_trippers.go:580]     Audit-Id: 075e42d6-6886-4aa6-a1ea-be4de5b322c6
	I0706 20:46:51.384417    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:51.384417    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:51.384417    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:51.384417    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:51.440441    1540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0706 20:46:51.884417    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:51.884417    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:51.884417    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:51.884417    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:51.887990    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:46:51.887990    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:51.887990    1540 round_trippers.go:580]     Audit-Id: 8d1ffa12-40b8-4236-92e0-e8bf23a81082
	I0706 20:46:51.887990    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:51.887990    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:51.887990    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:51.887990    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:51.887990    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:51 GMT
	I0706 20:46:51.887990    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:51.887990    1540 node_ready.go:58] node "multinode-144300" has status "Ready":"False"
	I0706 20:46:51.962628    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:46:51.962674    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:46:51.963080    1540 sshutil.go:53] new ssh client: &{IP:172.29.70.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 20:46:52.122342    1540 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0706 20:46:52.122463    1540 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0706 20:46:52.122463    1540 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0706 20:46:52.122463    1540 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0706 20:46:52.122463    1540 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0706 20:46:52.122463    1540 command_runner.go:130] > pod/storage-provisioner created
	I0706 20:46:52.155605    1540 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0706 20:46:52.388576    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:52.388625    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:52.388625    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:52.388625    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:52.393590    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:46:52.393590    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:52.393590    1540 round_trippers.go:580]     Audit-Id: 54e8e0a8-fbee-47d0-a7bf-855d6a5c8e60
	I0706 20:46:52.393590    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:52.393590    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:52.393590    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:52.393590    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:52.393590    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:52 GMT
	I0706 20:46:52.393590    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:52.485985    1540 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0706 20:46:52.490244    1540 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0706 20:46:52.492571    1540 addons.go:499] enable addons completed in 3.8150222s: enabled=[storage-provisioner default-storageclass]
	I0706 20:46:52.889457    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:52.889534    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:52.889534    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:52.889534    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:52.893922    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:46:52.894830    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:52.894875    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:52.894875    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:52.894875    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:52.894875    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:52 GMT
	I0706 20:46:52.894989    1540 round_trippers.go:580]     Audit-Id: af009933-b74a-4e05-967a-9ec7c3002bc0
	I0706 20:46:52.894989    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:52.895142    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:53.389653    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:53.389653    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:53.389653    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:53.389738    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:53.393474    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:46:53.393474    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:53.393474    1540 round_trippers.go:580]     Audit-Id: 639c1d9e-d3bc-40ff-995e-07ad0393c159
	I0706 20:46:53.393474    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:53.394255    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:53.394255    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:53.394255    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:53.394255    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:53 GMT
	I0706 20:46:53.394670    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:53.889506    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:53.889506    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:53.889577    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:53.889577    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:53.892938    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:46:53.893159    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:53.893159    1540 round_trippers.go:580]     Audit-Id: ab1d7ffc-5abb-48a7-a8be-51747a01e5b6
	I0706 20:46:53.893159    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:53.893159    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:53.893159    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:53.893159    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:53.893159    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:53 GMT
	I0706 20:46:53.893404    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:53.893826    1540 node_ready.go:58] node "multinode-144300" has status "Ready":"False"
	I0706 20:46:54.391771    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:54.391771    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:54.391866    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:54.391866    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:54.396375    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:46:54.396375    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:54.396375    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:54.396375    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:54 GMT
	I0706 20:46:54.396375    1540 round_trippers.go:580]     Audit-Id: 297ac6bd-de4e-4c27-9e80-7e3b9ebc7515
	I0706 20:46:54.396375    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:54.396375    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:54.396375    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:54.396600    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:54.893953    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:54.894042    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:54.894042    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:54.894042    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:54.898426    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:46:54.898426    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:54.898517    1540 round_trippers.go:580]     Audit-Id: 981e66e5-b492-452e-bd56-150f1db428db
	I0706 20:46:54.898517    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:54.898517    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:54.898517    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:54.898517    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:54.898587    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:54 GMT
	I0706 20:46:54.898809    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:55.385385    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:55.385458    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:55.385458    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:55.385511    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:55.389385    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:46:55.389385    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:55.389385    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:55.389385    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:55.389385    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:55.389385    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:55 GMT
	I0706 20:46:55.389385    1540 round_trippers.go:580]     Audit-Id: a1847ac5-ba8a-4682-9117-7c728393307d
	I0706 20:46:55.389385    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:55.389385    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:55.880215    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:55.880215    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:55.880215    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:55.880215    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:55.884631    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:46:55.884631    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:55.884631    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:55 GMT
	I0706 20:46:55.884631    1540 round_trippers.go:580]     Audit-Id: fbe23d57-6a76-4f27-a5b7-6f6a39644f21
	I0706 20:46:55.884631    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:55.884631    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:55.884778    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:55.884778    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:55.884930    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:56.390008    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:56.390008    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:56.390008    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:56.390008    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:56.394954    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:46:56.395370    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:56.395493    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:56 GMT
	I0706 20:46:56.395556    1540 round_trippers.go:580]     Audit-Id: b8a840ae-e105-485b-8347-96cbab54a32c
	I0706 20:46:56.395556    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:56.395556    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:56.395556    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:56.395556    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:56.395556    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:56.396679    1540 node_ready.go:58] node "multinode-144300" has status "Ready":"False"
	I0706 20:46:56.893204    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:56.893204    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:56.893280    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:56.893280    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:56.918312    1540 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0706 20:46:56.918312    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:56.918312    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:56.918312    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:56.918312    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:56 GMT
	I0706 20:46:56.918312    1540 round_trippers.go:580]     Audit-Id: 90d75dbe-9158-4215-aab2-4ea2b34513c6
	I0706 20:46:56.918312    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:56.918312    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:56.919278    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:57.378508    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:57.378508    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:57.378508    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:57.378508    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:57.383525    1540 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:46:57.383849    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:57.383849    1540 round_trippers.go:580]     Audit-Id: 99c9ab0e-79dc-4e3a-b0cc-e7bf5b10ace4
	I0706 20:46:57.383849    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:57.383849    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:57.383849    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:57.383849    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:57.383849    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:57 GMT
	I0706 20:46:57.384090    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:57.878798    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:57.878942    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:57.878942    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:57.878942    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:57.889163    1540 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0706 20:46:57.889163    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:57.889163    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:57 GMT
	I0706 20:46:57.889163    1540 round_trippers.go:580]     Audit-Id: c0bbd95a-2154-427e-b392-58daa219f949
	I0706 20:46:57.889163    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:57.889163    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:57.889163    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:57.889163    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:57.889701    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:58.378770    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:58.378770    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:58.378770    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:58.378770    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:58.383506    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:46:58.383506    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:58.383506    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:58.383506    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:58.383506    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:58.383506    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:58 GMT
	I0706 20:46:58.383506    1540 round_trippers.go:580]     Audit-Id: c9625015-ff0f-4229-8805-3cb9f0aa0666
	I0706 20:46:58.383506    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:58.383506    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:58.883099    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:58.883203    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:58.883203    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:58.883203    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:58.887330    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:46:58.888001    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:58.888001    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:58.888001    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:58.888001    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:58 GMT
	I0706 20:46:58.888001    1540 round_trippers.go:580]     Audit-Id: 9f987359-91dc-4486-8919-d336dfb8fe7f
	I0706 20:46:58.888001    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:58.888001    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:58.888173    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:58.888742    1540 node_ready.go:58] node "multinode-144300" has status "Ready":"False"
	I0706 20:46:59.382994    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:59.382994    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:59.382994    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:59.383079    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:59.389628    1540 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0706 20:46:59.389628    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:59.389628    1540 round_trippers.go:580]     Audit-Id: 7021f69f-dc97-464c-9f89-ca295618f48a
	I0706 20:46:59.389628    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:59.389628    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:59.389628    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:59.389628    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:59.389628    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:59 GMT
	I0706 20:46:59.389628    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:46:59.882535    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:46:59.882612    1540 round_trippers.go:469] Request Headers:
	I0706 20:46:59.882674    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:46:59.882674    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:46:59.886082    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:46:59.886082    1540 round_trippers.go:577] Response Headers:
	I0706 20:46:59.886082    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:46:59.886082    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:46:59 GMT
	I0706 20:46:59.886082    1540 round_trippers.go:580]     Audit-Id: dd522582-f717-4c10-be5a-59901a038840
	I0706 20:46:59.886521    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:46:59.886521    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:46:59.886569    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:46:59.886879    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:47:00.381988    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:00.382104    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:00.382104    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:00.382104    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:00.386051    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:47:00.386051    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:00.386051    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:00.386051    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:00.386051    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:00.386051    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:00.386051    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:00 GMT
	I0706 20:47:00.386051    1540 round_trippers.go:580]     Audit-Id: 5b696506-5f2a-4732-990f-7b4e85be6f52
	I0706 20:47:00.386773    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:47:00.880561    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:00.880680    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:00.880680    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:00.880680    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:00.885240    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:47:00.885240    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:00.885240    1540 round_trippers.go:580]     Audit-Id: c7a558ef-5691-49cc-899a-f4c59be69674
	I0706 20:47:00.885240    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:00.885361    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:00.885361    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:00.885412    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:00.885446    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:00 GMT
	I0706 20:47:00.885761    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:47:01.380132    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:01.380132    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:01.380426    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:01.380501    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:01.383795    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:47:01.384598    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:01.384598    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:01.384598    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:01 GMT
	I0706 20:47:01.384598    1540 round_trippers.go:580]     Audit-Id: 48dd465a-50da-47ad-af7d-6b641ebf13ed
	I0706 20:47:01.384598    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:01.384598    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:01.384598    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:01.384941    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:47:01.385413    1540 node_ready.go:58] node "multinode-144300" has status "Ready":"False"
	I0706 20:47:01.879458    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:01.879526    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:01.879526    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:01.879526    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:01.883095    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:47:01.883095    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:01.883095    1540 round_trippers.go:580]     Audit-Id: a9bf89a9-1724-40bb-bdd2-37fc996dfa9c
	I0706 20:47:01.883095    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:01.883095    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:01.883095    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:01.883095    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:01.883095    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:01 GMT
	I0706 20:47:01.883095    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"361","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0706 20:47:02.379746    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:02.379836    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:02.379836    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:02.379836    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:02.386693    1540 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0706 20:47:02.386749    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:02.386749    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:02.386749    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:02 GMT
	I0706 20:47:02.386749    1540 round_trippers.go:580]     Audit-Id: 61e2d2f2-a3c4-48d0-a406-5d88ce649f12
	I0706 20:47:02.386749    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:02.386749    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:02.386749    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:02.387069    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"431","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0706 20:47:02.387630    1540 node_ready.go:49] node "multinode-144300" has status "Ready":"True"
	I0706 20:47:02.387814    1540 node_ready.go:38] duration metric: took 12.5155072s waiting for node "multinode-144300" to be "Ready" ...
	I0706 20:47:02.387853    1540 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 20:47:02.387957    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods
	I0706 20:47:02.387957    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:02.387957    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:02.387957    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:02.393576    1540 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:47:02.393576    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:02.393576    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:02.393576    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:02.393576    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:02.393576    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:02.393576    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:02 GMT
	I0706 20:47:02.393576    1540 round_trippers.go:580]     Audit-Id: 845499fd-f100-407e-8753-50a2e8c80f45
	I0706 20:47:02.394926    1540 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"437"},"items":[{"metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"436","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53973 chars]
	I0706 20:47:02.400041    1540 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace to be "Ready" ...
	I0706 20:47:02.400272    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:47:02.400339    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:02.400339    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:02.400339    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:02.403455    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:47:02.403455    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:02.403611    1540 round_trippers.go:580]     Audit-Id: 3b20ca77-8e64-4c86-8b43-9bf4136f4c34
	I0706 20:47:02.403611    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:02.403611    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:02.403611    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:02.403611    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:02.403662    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:02 GMT
	I0706 20:47:02.403931    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"436","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0706 20:47:02.404706    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:02.404739    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:02.404739    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:02.404739    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:02.406893    1540 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:47:02.406893    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:02.406893    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:02.406893    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:02 GMT
	I0706 20:47:02.407726    1540 round_trippers.go:580]     Audit-Id: 71d72c56-ab89-486a-858a-c2c1ced34978
	I0706 20:47:02.407726    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:02.407726    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:02.407726    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:02.407974    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"431","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0706 20:47:02.917158    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:47:02.917205    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:02.917205    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:02.917205    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:02.925028    1540 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0706 20:47:02.925028    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:02.925028    1540 round_trippers.go:580]     Audit-Id: 791b2a7f-0e7c-494b-8698-1bae40bab499
	I0706 20:47:02.925028    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:02.925028    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:02.925028    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:02.925028    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:02.925028    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:02 GMT
	I0706 20:47:02.925028    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"436","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0706 20:47:02.925839    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:02.925839    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:02.925839    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:02.925839    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:02.928904    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:47:02.929205    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:02.929205    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:02.929205    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:02.929205    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:02.929205    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:02 GMT
	I0706 20:47:02.929284    1540 round_trippers.go:580]     Audit-Id: f5ab5f86-2467-4d85-be7e-129d809b5bce
	I0706 20:47:02.929284    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:02.929660    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"431","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0706 20:47:03.408704    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:47:03.408704    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:03.408803    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:03.408803    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:03.414110    1540 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:47:03.414110    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:03.414110    1540 round_trippers.go:580]     Audit-Id: 628a43af-8d57-4e00-869f-807cc33b4bf6
	I0706 20:47:03.414110    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:03.414110    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:03.414110    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:03.414110    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:03.414110    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:03 GMT
	I0706 20:47:03.414110    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"436","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0706 20:47:03.415099    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:03.415099    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:03.415099    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:03.415099    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:03.419103    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:47:03.419917    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:03.419917    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:03.419917    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:03.419917    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:03.419917    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:03 GMT
	I0706 20:47:03.419917    1540 round_trippers.go:580]     Audit-Id: 89155952-a27d-43f2-92b9-4472099eaf45
	I0706 20:47:03.419917    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:03.420300    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"431","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0706 20:47:03.912071    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:47:03.912071    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:03.912071    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:03.912071    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:03.916298    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:47:03.916533    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:03.916533    1540 round_trippers.go:580]     Audit-Id: 36550b16-9a5e-42b7-b73e-8d41d86959eb
	I0706 20:47:03.916533    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:03.916533    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:03.916533    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:03.916533    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:03.916533    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:03 GMT
	I0706 20:47:03.916992    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"436","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0706 20:47:03.918801    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:03.918889    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:03.918889    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:03.918983    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:03.923935    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:47:03.924452    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:03.924452    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:03.924452    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:03.924452    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:03 GMT
	I0706 20:47:03.924452    1540 round_trippers.go:580]     Audit-Id: e69e8e7c-9509-4f2a-8016-d4cbc464a5ae
	I0706 20:47:03.924452    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:03.924452    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:03.924824    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"431","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0706 20:47:04.415705    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:47:04.415705    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:04.415846    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:04.415846    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:04.419570    1540 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:47:04.419621    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:04.419621    1540 round_trippers.go:580]     Audit-Id: c610f706-44f8-4246-8be4-acd0bb44a320
	I0706 20:47:04.419621    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:04.419621    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:04.419621    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:04.419621    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:04.419753    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:04 GMT
	I0706 20:47:04.420009    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"448","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0706 20:47:04.420724    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:04.420724    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:04.420724    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:04.420724    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:04.424156    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:47:04.424156    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:04.424156    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:04.424156    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:04.424214    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:04.424214    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:04.424214    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:04 GMT
	I0706 20:47:04.424214    1540 round_trippers.go:580]     Audit-Id: 52f6121b-4063-4edc-892e-4ccc0144fa6f
	I0706 20:47:04.424266    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"431","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0706 20:47:04.424807    1540 pod_ready.go:92] pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace has status "Ready":"True"
	I0706 20:47:04.424807    1540 pod_ready.go:81] duration metric: took 2.0246446s waiting for pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace to be "Ready" ...
	I0706 20:47:04.424807    1540 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:47:04.425019    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-144300
	I0706 20:47:04.425019    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:04.425019    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:04.425084    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:04.427855    1540 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:47:04.428465    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:04.428465    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:04.428465    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:04.428465    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:04 GMT
	I0706 20:47:04.428465    1540 round_trippers.go:580]     Audit-Id: 12da6574-dca9-4538-965e-2f51f6c93fea
	I0706 20:47:04.428652    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:04.428652    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:04.428830    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-144300","namespace":"kube-system","uid":"368f429f-74ac-49a7-9c8f-89f95c37d31d","resourceVersion":"419","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.70.202:2379","kubernetes.io/config.hash":"e0089eceedc87039bc11bd2d8713b69e","kubernetes.io/config.mirror":"e0089eceedc87039bc11bd2d8713b69e","kubernetes.io/config.seen":"2023-07-06T20:46:36.035688887Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0706 20:47:04.429059    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:04.429059    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:04.429059    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:04.429059    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:04.432146    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:47:04.432146    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:04.432237    1540 round_trippers.go:580]     Audit-Id: f07522a8-9e8e-46c7-9149-d2147ed4e07d
	I0706 20:47:04.432237    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:04.432237    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:04.432237    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:04.432237    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:04.432313    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:04 GMT
	I0706 20:47:04.432610    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"431","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0706 20:47:04.432855    1540 pod_ready.go:92] pod "etcd-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:47:04.432855    1540 pod_ready.go:81] duration metric: took 8.0487ms waiting for pod "etcd-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:47:04.432855    1540 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:47:04.432855    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-144300
	I0706 20:47:04.432855    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:04.432855    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:04.432855    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:04.437892    1540 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:47:04.437892    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:04.437892    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:04.437892    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:04 GMT
	I0706 20:47:04.437892    1540 round_trippers.go:580]     Audit-Id: b359903f-f7ae-451e-89a0-137d2c8086e1
	I0706 20:47:04.437892    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:04.437892    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:04.437892    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:04.438938    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-144300","namespace":"kube-system","uid":"a8848557-ed29-484b-9365-b07c4da9051f","resourceVersion":"423","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.70.202:8443","kubernetes.io/config.hash":"cde174f192a25fd146cf674bbcb8ed25","kubernetes.io/config.mirror":"cde174f192a25fd146cf674bbcb8ed25","kubernetes.io/config.seen":"2023-07-06T20:46:36.035683287Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0706 20:47:04.439084    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:04.439084    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:04.439084    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:04.439084    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:04.442872    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:47:04.443638    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:04.443638    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:04 GMT
	I0706 20:47:04.443638    1540 round_trippers.go:580]     Audit-Id: 0f6082ff-ec93-47c2-9c9a-ab9212e8353e
	I0706 20:47:04.443638    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:04.443638    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:04.443638    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:04.443751    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:04.443887    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"431","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0706 20:47:04.444137    1540 pod_ready.go:92] pod "kube-apiserver-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:47:04.444137    1540 pod_ready.go:81] duration metric: took 11.2818ms waiting for pod "kube-apiserver-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:47:04.444137    1540 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:47:04.444137    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-144300
	I0706 20:47:04.444137    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:04.444137    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:04.444137    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:04.447114    1540 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:47:04.447114    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:04.447741    1540 round_trippers.go:580]     Audit-Id: cdfab577-eea4-40fa-96ce-15f878e275b9
	I0706 20:47:04.447741    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:04.447741    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:04.447798    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:04.447798    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:04.447798    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:04 GMT
	I0706 20:47:04.448155    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-144300","namespace":"kube-system","uid":"d9a60269-68e9-4ea2-82fe-63cedee225ef","resourceVersion":"420","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e49c2c2437113c0995d945e240b5b14","kubernetes.io/config.mirror":"8e49c2c2437113c0995d945e240b5b14","kubernetes.io/config.seen":"2023-07-06T20:46:36.035686687Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0706 20:47:04.448494    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:04.448494    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:04.448494    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:04.448494    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:04.451068    1540 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:47:04.451068    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:04.451068    1540 round_trippers.go:580]     Audit-Id: 88003a28-6a90-412b-980c-1c4e2c12af6a
	I0706 20:47:04.451068    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:04.452072    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:04.452072    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:04.452072    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:04.452131    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:04 GMT
	I0706 20:47:04.452213    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"431","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0706 20:47:04.452452    1540 pod_ready.go:92] pod "kube-controller-manager-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:47:04.452452    1540 pod_ready.go:81] duration metric: took 8.3143ms waiting for pod "kube-controller-manager-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:47:04.452452    1540 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6h62" in "kube-system" namespace to be "Ready" ...
	I0706 20:47:04.452452    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6h62
	I0706 20:47:04.452452    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:04.452452    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:04.452452    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:04.455749    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:47:04.455749    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:04.455749    1540 round_trippers.go:580]     Audit-Id: b66b6ee8-50ec-41c7-9e53-38226bb710d5
	I0706 20:47:04.455828    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:04.455828    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:04.455828    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:04.455828    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:04.455828    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:04 GMT
	I0706 20:47:04.456015    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h6h62","generateName":"kube-proxy-","namespace":"kube-system","uid":"6949ff1e-f5c0-4ab2-ae7f-6b30775e220d","resourceVersion":"416","creationTimestamp":"2023-07-06T20:46:49Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0706 20:47:04.456487    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:04.456487    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:04.456487    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:04.456487    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:04.459824    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:47:04.459824    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:04.460367    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:04.460367    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:04.460367    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:04.460367    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:04 GMT
	I0706 20:47:04.460367    1540 round_trippers.go:580]     Audit-Id: e43aa518-7ffc-47f1-b9a5-714a2b70f7a1
	I0706 20:47:04.460453    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:04.460844    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"431","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0706 20:47:04.461520    1540 pod_ready.go:92] pod "kube-proxy-h6h62" in "kube-system" namespace has status "Ready":"True"
	I0706 20:47:04.461520    1540 pod_ready.go:81] duration metric: took 9.0682ms waiting for pod "kube-proxy-h6h62" in "kube-system" namespace to be "Ready" ...
	I0706 20:47:04.461520    1540 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:47:04.618210    1540 request.go:628] Waited for 156.4537ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-144300
	I0706 20:47:04.618330    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-144300
	I0706 20:47:04.618330    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:04.618330    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:04.618330    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:04.621784    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:47:04.621829    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:04.621829    1540 round_trippers.go:580]     Audit-Id: 323a957b-53fe-4790-b94f-73b022b25449
	I0706 20:47:04.621829    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:04.621829    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:04.621829    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:04.621904    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:04.621904    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:04 GMT
	I0706 20:47:04.622276    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-144300","namespace":"kube-system","uid":"70e904dd-fca0-436e-84d9-101fbc1cd9b0","resourceVersion":"421","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee7970f809cc50237f5ebbefc1799bf2","kubernetes.io/config.mirror":"ee7970f809cc50237f5ebbefc1799bf2","kubernetes.io/config.seen":"2023-07-06T20:46:36.035687887Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0706 20:47:04.818984    1540 request.go:628] Waited for 195.9663ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:04.819134    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:47:04.819134    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:04.819134    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:04.819134    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:04.822680    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:47:04.822734    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:04.822734    1540 round_trippers.go:580]     Audit-Id: 43894655-5bf4-4584-9f36-f69d7e086fe5
	I0706 20:47:04.822734    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:04.822734    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:04.822734    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:04.822734    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:04.822734    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:04 GMT
	I0706 20:47:04.822734    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"431","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0706 20:47:04.823376    1540 pod_ready.go:92] pod "kube-scheduler-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:47:04.823376    1540 pod_ready.go:81] duration metric: took 361.8533ms waiting for pod "kube-scheduler-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:47:04.823376    1540 pod_ready.go:38] duration metric: took 2.4355048s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 20:47:04.823376    1540 api_server.go:52] waiting for apiserver process to appear ...
	I0706 20:47:04.834988    1540 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 20:47:04.854000    1540 command_runner.go:130] > 2000
	I0706 20:47:04.854036    1540 api_server.go:72] duration metric: took 15.509663s to wait for apiserver process to appear ...
	I0706 20:47:04.854101    1540 api_server.go:88] waiting for apiserver healthz status ...
	I0706 20:47:04.854131    1540 api_server.go:253] Checking apiserver healthz at https://172.29.70.202:8443/healthz ...
	I0706 20:47:04.862120    1540 api_server.go:279] https://172.29.70.202:8443/healthz returned 200:
	ok
	I0706 20:47:04.862120    1540 round_trippers.go:463] GET https://172.29.70.202:8443/version
	I0706 20:47:04.862754    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:04.862754    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:04.862754    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:04.863418    1540 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0706 20:47:04.864414    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:04.864414    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:04.864414    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:04.864414    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:04.864414    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:04.864414    1540 round_trippers.go:580]     Content-Length: 263
	I0706 20:47:04.864504    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:04 GMT
	I0706 20:47:04.864504    1540 round_trippers.go:580]     Audit-Id: d05f6d88-065e-4679-ba32-14afe0d1080e
	I0706 20:47:04.864504    1540 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0706 20:47:04.864624    1540 api_server.go:141] control plane version: v1.27.3
	I0706 20:47:04.864676    1540 api_server.go:131] duration metric: took 10.5223ms to wait for apiserver health ...
	I0706 20:47:04.864676    1540 system_pods.go:43] waiting for kube-system pods to appear ...
	I0706 20:47:05.019783    1540 request.go:628] Waited for 154.8112ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods
	I0706 20:47:05.019862    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods
	I0706 20:47:05.019862    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:05.019862    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:05.019862    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:05.024250    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:47:05.024250    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:05.024250    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:05.024250    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:05.025248    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:05.025248    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:05 GMT
	I0706 20:47:05.025276    1540 round_trippers.go:580]     Audit-Id: 10538b38-75f4-4f40-9291-09a285e5c2a5
	I0706 20:47:05.025276    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:05.027030    1540 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"448","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54089 chars]
	I0706 20:47:05.029323    1540 system_pods.go:59] 8 kube-system pods found
	I0706 20:47:05.029323    1540 system_pods.go:61] "coredns-5d78c9869d-m7j99" [dfa019d5-9528-4f25-8aab-03d1d276bb0c] Running
	I0706 20:47:05.029323    1540 system_pods.go:61] "etcd-multinode-144300" [368f429f-74ac-49a7-9c8f-89f95c37d31d] Running
	I0706 20:47:05.029323    1540 system_pods.go:61] "kindnet-9pjnm" [85523421-1320-4587-ba8c-cbb357ee7eb1] Running
	I0706 20:47:05.029323    1540 system_pods.go:61] "kube-apiserver-multinode-144300" [a8848557-ed29-484b-9365-b07c4da9051f] Running
	I0706 20:47:05.029468    1540 system_pods.go:61] "kube-controller-manager-multinode-144300" [d9a60269-68e9-4ea2-82fe-63cedee225ef] Running
	I0706 20:47:05.029468    1540 system_pods.go:61] "kube-proxy-h6h62" [6949ff1e-f5c0-4ab2-ae7f-6b30775e220d] Running
	I0706 20:47:05.029468    1540 system_pods.go:61] "kube-scheduler-multinode-144300" [70e904dd-fca0-436e-84d9-101fbc1cd9b0] Running
	I0706 20:47:05.029468    1540 system_pods.go:61] "storage-provisioner" [75b208e7-5f24-4849-867c-c7fa45213999] Running
	I0706 20:47:05.029468    1540 system_pods.go:74] duration metric: took 164.7906ms to wait for pod list to return data ...
	I0706 20:47:05.029468    1540 default_sa.go:34] waiting for default service account to be created ...
	I0706 20:47:05.224588    1540 request.go:628] Waited for 194.7005ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.70.202:8443/api/v1/namespaces/default/serviceaccounts
	I0706 20:47:05.224588    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/default/serviceaccounts
	I0706 20:47:05.224588    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:05.224752    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:05.224752    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:05.228270    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:47:05.228270    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:05.228948    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:05.228948    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:05.228948    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:05.228948    1540 round_trippers.go:580]     Content-Length: 261
	I0706 20:47:05.228948    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:05 GMT
	I0706 20:47:05.228948    1540 round_trippers.go:580]     Audit-Id: a3c98b7a-fed1-44be-a681-1d53e4d84173
	I0706 20:47:05.228948    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:05.229035    1540 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"47609419-2a68-437e-86dd-3015903126c0","resourceVersion":"333","creationTimestamp":"2023-07-06T20:46:48Z"}}]}
	I0706 20:47:05.229313    1540 default_sa.go:45] found service account: "default"
	I0706 20:47:05.229440    1540 default_sa.go:55] duration metric: took 199.8442ms for default service account to be created ...
	I0706 20:47:05.229440    1540 system_pods.go:116] waiting for k8s-apps to be running ...
	I0706 20:47:05.426538    1540 request.go:628] Waited for 196.867ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods
	I0706 20:47:05.426893    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods
	I0706 20:47:05.426893    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:05.426893    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:05.426893    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:05.438504    1540 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0706 20:47:05.438504    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:05.438504    1540 round_trippers.go:580]     Audit-Id: 2b74be22-5d66-45f9-8887-d79ca307b856
	I0706 20:47:05.438504    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:05.438504    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:05.439479    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:05.439479    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:05.439479    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:05 GMT
	I0706 20:47:05.441076    1540 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"453"},"items":[{"metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"448","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54089 chars]
	I0706 20:47:05.443402    1540 system_pods.go:86] 8 kube-system pods found
	I0706 20:47:05.443402    1540 system_pods.go:89] "coredns-5d78c9869d-m7j99" [dfa019d5-9528-4f25-8aab-03d1d276bb0c] Running
	I0706 20:47:05.443402    1540 system_pods.go:89] "etcd-multinode-144300" [368f429f-74ac-49a7-9c8f-89f95c37d31d] Running
	I0706 20:47:05.443402    1540 system_pods.go:89] "kindnet-9pjnm" [85523421-1320-4587-ba8c-cbb357ee7eb1] Running
	I0706 20:47:05.443402    1540 system_pods.go:89] "kube-apiserver-multinode-144300" [a8848557-ed29-484b-9365-b07c4da9051f] Running
	I0706 20:47:05.443402    1540 system_pods.go:89] "kube-controller-manager-multinode-144300" [d9a60269-68e9-4ea2-82fe-63cedee225ef] Running
	I0706 20:47:05.443402    1540 system_pods.go:89] "kube-proxy-h6h62" [6949ff1e-f5c0-4ab2-ae7f-6b30775e220d] Running
	I0706 20:47:05.443402    1540 system_pods.go:89] "kube-scheduler-multinode-144300" [70e904dd-fca0-436e-84d9-101fbc1cd9b0] Running
	I0706 20:47:05.443402    1540 system_pods.go:89] "storage-provisioner" [75b208e7-5f24-4849-867c-c7fa45213999] Running
	I0706 20:47:05.443402    1540 system_pods.go:126] duration metric: took 213.961ms to wait for k8s-apps to be running ...
	I0706 20:47:05.443402    1540 system_svc.go:44] waiting for kubelet service to be running ....
	I0706 20:47:05.452155    1540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 20:47:05.482246    1540 system_svc.go:56] duration metric: took 38.8429ms WaitForService to wait for kubelet.
	I0706 20:47:05.482311    1540 kubeadm.go:581] duration metric: took 16.1379329s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0706 20:47:05.482364    1540 node_conditions.go:102] verifying NodePressure condition ...
	I0706 20:47:05.615903    1540 request.go:628] Waited for 133.3219ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.70.202:8443/api/v1/nodes
	I0706 20:47:05.615903    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes
	I0706 20:47:05.616176    1540 round_trippers.go:469] Request Headers:
	I0706 20:47:05.616176    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:47:05.616218    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:47:05.620537    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:47:05.620537    1540 round_trippers.go:577] Response Headers:
	I0706 20:47:05.620537    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:47:05.620627    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:47:05.620627    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:47:05.620627    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:47:05 GMT
	I0706 20:47:05.620657    1540 round_trippers.go:580]     Audit-Id: 720eeafb-d810-4505-8aba-eec1da2e12c2
	I0706 20:47:05.620657    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:47:05.620807    1540 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"454"},"items":[{"metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"431","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4835 chars]
	I0706 20:47:05.620959    1540 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 20:47:05.620959    1540 node_conditions.go:123] node cpu capacity is 2
	I0706 20:47:05.621488    1540 node_conditions.go:105] duration metric: took 139.1234ms to run NodePressure ...
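The repeated "Waited for … due to client-side throttling" lines above come from the Kubernetes client's fixed-QPS rate limiter: each request must be spaced at least `1/QPS` apart, so a burst of GETs gets delayed. A minimal sketch of that spacing calculation (this is a simplification, not client-go's actual flowcontrol implementation):

```python
def throttle_delay(last_request: float, now: float, min_interval: float) -> float:
    """Return how long the next request must wait under a fixed-QPS
    client-side limit: requests must be at least min_interval apart."""
    return max(0.0, min_interval - (now - last_request))


# At the default 5 QPS the minimum spacing is 200ms; a request issued 5ms
# after the previous one waits ~195ms - the same order of magnitude as the
# ~194ms and ~196ms waits logged above.
delay = throttle_delay(last_request=0.0, now=0.005, min_interval=0.2)
print(round(delay, 3))
```
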
	I0706 20:47:05.621542    1540 start.go:228] waiting for startup goroutines ...
	I0706 20:47:05.621542    1540 start.go:233] waiting for cluster config update ...
	I0706 20:47:05.621542    1540 start.go:242] writing updated cluster config ...
	I0706 20:47:05.625648    1540 out.go:177] 
	I0706 20:47:05.636379    1540 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:47:05.636768    1540 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\config.json ...
	I0706 20:47:05.641070    1540 out.go:177] * Starting worker node multinode-144300-m02 in cluster multinode-144300
	I0706 20:47:05.645003    1540 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 20:47:05.645003    1540 cache.go:57] Caching tarball of preloaded images
	I0706 20:47:05.645003    1540 preload.go:174] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0706 20:47:05.645003    1540 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 20:47:05.646007    1540 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\config.json ...
	I0706 20:47:05.647668    1540 start.go:365] acquiring machines lock for multinode-144300-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 20:47:05.648714    1540 start.go:369] acquired machines lock for "multinode-144300-m02" in 1.0456ms
	I0706 20:47:05.648838    1540 start.go:93] Provisioning new machine with config: &{Name:multinode-144300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-144300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.70.202 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeReq
uested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name:m02 IP: Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0706 20:47:05.648950    1540 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0706 20:47:05.652195    1540 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0706 20:47:05.652195    1540 start.go:159] libmachine.API.Create for "multinode-144300" (driver="hyperv")
	I0706 20:47:05.652195    1540 client.go:168] LocalClient.Create starting
	I0706 20:47:05.652974    1540 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0706 20:47:05.653299    1540 main.go:141] libmachine: Decoding PEM data...
	I0706 20:47:05.653299    1540 main.go:141] libmachine: Parsing certificate...
	I0706 20:47:05.653513    1540 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0706 20:47:05.653706    1540 main.go:141] libmachine: Decoding PEM data...
	I0706 20:47:05.653706    1540 main.go:141] libmachine: Parsing certificate...
	I0706 20:47:05.653706    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0706 20:47:06.040832    1540 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0706 20:47:06.040832    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:06.040934    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0706 20:47:06.636037    1540 main.go:141] libmachine: [stdout =====>] : False
	
	I0706 20:47:06.636201    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:06.636260    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0706 20:47:07.090950    1540 main.go:141] libmachine: [stdout =====>] : True
	
	I0706 20:47:07.091167    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:07.091270    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0706 20:47:08.499547    1540 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0706 20:47:08.499890    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:08.502821    1540 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.30.1-1688144767-16765-amd64.iso...
	I0706 20:47:08.864859    1540 main.go:141] libmachine: Creating SSH key...
	I0706 20:47:09.021501    1540 main.go:141] libmachine: Creating VM...
	I0706 20:47:09.021501    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0706 20:47:10.315584    1540 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0706 20:47:10.315584    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:10.315791    1540 main.go:141] libmachine: Using switch "Default Switch"
	I0706 20:47:10.315938    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0706 20:47:10.872392    1540 main.go:141] libmachine: [stdout =====>] : True
	
	I0706 20:47:10.872772    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:10.872772    1540 main.go:141] libmachine: Creating VHD
	I0706 20:47:10.872874    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0706 20:47:12.500886    1540 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 669B1F24-43B9-4258-A216-0FE28A9551B7
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0706 20:47:12.501158    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:12.501158    1540 main.go:141] libmachine: Writing magic tar header
	I0706 20:47:12.501473    1540 main.go:141] libmachine: Writing SSH key tar header
	I0706 20:47:12.509521    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0706 20:47:14.164620    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:14.164884    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:14.164884    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\disk.vhd' -SizeBytes 20000MB
	I0706 20:47:15.282114    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:15.282114    1540 main.go:141] libmachine: [stderr =====>] : 
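The VHD sequence above is libmachine's provisioning trick: it creates a tiny *fixed* VHD (whose payload is a raw disk image), writes a "magic" tar header plus the SSH key directly into that payload so the guest can pick the key up on first boot, then converts the disk to dynamic and resizes it to the requested 20000MB. A sketch of the tar-stream part, built in memory (the entry path and key bytes here are illustrative, not minikube's exact layout):

```python
import io
import tarfile

# Build a tar stream containing an SSH public key, as libmachine does when
# "Writing magic tar header" / "Writing SSH key tar header" above.
pubkey = b"ssh-rsa AAAA... illustrative-key"

buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    info = tarfile.TarInfo(".ssh/authorized_keys")  # path is illustrative
    info.size = len(pubkey)
    tar.addfile(info, io.BytesIO(pubkey))

data = buf.getvalue()
# A tar header begins with the member name, and tar streams are written in
# 512-byte blocks - which is why a raw fixed VHD payload can carry one.
print(data[:5])
print(len(data) % 512)
```
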
	I0706 20:47:15.282666    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-144300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0706 20:47:17.114949    1540 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-144300-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0706 20:47:17.114949    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:17.115050    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-144300-m02 -DynamicMemoryEnabled $false
	I0706 20:47:17.858770    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:17.858770    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:17.858874    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-144300-m02 -Count 2
	I0706 20:47:18.632343    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:18.632343    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:18.632343    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-144300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\boot2docker.iso'
	I0706 20:47:19.701898    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:19.701898    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:19.701898    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-144300-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\disk.vhd'
	I0706 20:47:20.825609    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:20.825778    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:20.825778    1540 main.go:141] libmachine: Starting VM...
	I0706 20:47:20.825864    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-144300-m02
	I0706 20:47:22.416332    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:22.416488    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:22.416488    1540 main.go:141] libmachine: Waiting for host to start...
	I0706 20:47:22.416518    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:23.109693    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:23.109693    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:23.109693    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:24.064782    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:24.065481    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:25.081335    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:25.766366    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:25.766366    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:25.766366    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:26.732252    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:26.732252    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:27.735618    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:28.421238    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:28.421467    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:28.421671    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:29.374660    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:29.374660    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:30.376418    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:31.089725    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:31.089957    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:31.090018    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:32.069199    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:32.069381    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:33.071773    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:33.783840    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:33.783840    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:33.784018    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:34.767041    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:34.767114    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:35.778175    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:36.442760    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:36.442760    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:36.442840    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:37.393945    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:37.393999    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:38.407693    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:39.090564    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:39.090703    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:39.090882    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:40.006950    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:40.007009    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:41.011306    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:41.686761    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:41.687025    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:41.687166    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:42.640302    1540 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:47:42.640461    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:43.655129    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:44.337076    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:44.337076    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:44.337076    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:45.314069    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:47:45.314102    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:45.314155    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:46.024145    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:46.024145    1540 main.go:141] libmachine: [stderr =====>] : 
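The long run of alternating `( Hyper-V\Get-VM … ).state` and `ipaddresses[0]` calls above is a plain poll loop: while the VM reports `Running`, keep asking the first network adapter for an address until DHCP hands one out (here, `172.29.79.241` after roughly 20 seconds). The same pattern, sketched with hypothetical helper callables rather than minikube's real driver API:

```python
import time


def wait_for_ip(get_state, get_ip, timeout=120.0, interval=1.0):
    """Poll VM state and adapter IP until an address appears.

    get_state/get_ip are hypothetical callables standing in for the
    repeated Get-VM invocations in the log above."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_state() == "Running" :
            ip = get_ip()
            if ip:  # empty string until DHCP assigns an address
                return ip
        time.sleep(interval)
    raise TimeoutError("VM did not report an IP address in time")


# Simulated driver: the address shows up on the third poll.
answers = iter(["", "", "172.29.79.241"])
ip = wait_for_ip(lambda: "Running", lambda: next(answers), interval=0.0)
print(ip)
```
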
	I0706 20:47:46.024223    1540 machine.go:88] provisioning docker machine ...
	I0706 20:47:46.024223    1540 buildroot.go:166] provisioning hostname "multinode-144300-m02"
	I0706 20:47:46.024304    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:46.740350    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:46.740488    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:46.740488    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:47.718348    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:47:47.718348    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:47.722761    1540 main.go:141] libmachine: Using SSH client type: native
	I0706 20:47:47.730821    1540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.79.241 22 <nil> <nil>}
	I0706 20:47:47.730821    1540 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-144300-m02 && echo "multinode-144300-m02" | sudo tee /etc/hostname
	I0706 20:47:47.886080    1540 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-144300-m02
	
	I0706 20:47:47.886080    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:48.562137    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:48.562137    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:48.562204    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:49.511050    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:47:49.511083    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:49.515177    1540 main.go:141] libmachine: Using SSH client type: native
	I0706 20:47:49.516010    1540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.79.241 22 <nil> <nil>}
	I0706 20:47:49.516010    1540 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-144300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-144300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-144300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0706 20:47:49.670242    1540 main.go:141] libmachine: SSH cmd err, output: <nil>: 
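The SSH snippet above patches `/etc/hosts` so the new hostname resolves locally: if no entry for the hostname exists, an existing `127.0.1.1` line is rewritten in place, otherwise the mapping is appended. The same logic can be replayed against an in-memory copy of a hosts file (hostname and contents here are illustrative):

```python
import re

# Replay of the hosts-file patch minikube runs over SSH, applied to an
# in-memory string instead of /etc/hosts.
name = "multinode-144300-m02"
hosts = "127.0.0.1 localhost\n127.0.1.1 oldname\n"

if not re.search(r"\s" + re.escape(name) + r"$", hosts, re.M):
    if re.search(r"^127\.0\.1\.1\s", hosts, re.M):
        # An existing 127.0.1.1 entry is rewritten in place.
        hosts = re.sub(r"^127\.0\.1\.1\s.*$", f"127.0.1.1 {name}",
                       hosts, flags=re.M)
    else:
        # Otherwise the mapping is appended.
        hosts += f"127.0.1.1 {name}\n"

print(hosts.splitlines()[1])
```
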
	I0706 20:47:49.670242    1540 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0706 20:47:49.670242    1540 buildroot.go:174] setting up certificates
	I0706 20:47:49.670242    1540 provision.go:83] configureAuth start
	I0706 20:47:49.670242    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:50.334424    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:50.334424    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:50.334503    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:51.298815    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:47:51.299008    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:51.299224    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:52.011374    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:52.011658    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:52.011658    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:53.035241    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:47:53.035241    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:53.035512    1540 provision.go:138] copyHostCerts
	I0706 20:47:53.035714    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0706 20:47:53.036032    1540 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0706 20:47:53.036032    1540 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0706 20:47:53.036432    1540 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0706 20:47:53.037824    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0706 20:47:53.038089    1540 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0706 20:47:53.038089    1540 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0706 20:47:53.038647    1540 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0706 20:47:53.039255    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0706 20:47:53.039930    1540 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0706 20:47:53.039982    1540 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0706 20:47:53.040351    1540 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0706 20:47:53.041773    1540 provision.go:112] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-144300-m02 san=[172.29.79.241 172.29.79.241 localhost 127.0.0.1 minikube multinode-144300-m02]
	I0706 20:47:53.290030    1540 provision.go:172] copyRemoteCerts
	I0706 20:47:53.300924    1540 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0706 20:47:53.300924    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:54.016834    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:54.017048    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:54.017136    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:55.030428    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:47:55.030428    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:55.030890    1540 sshutil.go:53] new ssh client: &{IP:172.29.79.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\id_rsa Username:docker}
	I0706 20:47:55.139436    1540 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.8384988s)
	I0706 20:47:55.139490    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0706 20:47:55.139959    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0706 20:47:55.175991    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0706 20:47:55.176468    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0706 20:47:55.216298    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0706 20:47:55.216665    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0706 20:47:55.256577    1540 provision.go:86] duration metric: configureAuth took 5.5862939s
	I0706 20:47:55.256632    1540 buildroot.go:189] setting minikube options for container-runtime
	I0706 20:47:55.257144    1540 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:47:55.257251    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:55.944591    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:55.944591    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:55.944702    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:56.889150    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:47:56.889150    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:56.893720    1540 main.go:141] libmachine: Using SSH client type: native
	I0706 20:47:56.894287    1540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.79.241 22 <nil> <nil>}
	I0706 20:47:56.894813    1540 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0706 20:47:57.036608    1540 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0706 20:47:57.036608    1540 buildroot.go:70] root file system type: tmpfs
	I0706 20:47:57.036790    1540 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0706 20:47:57.036882    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:57.707624    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:57.707624    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:57.707694    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:47:58.677195    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:47:58.677195    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:58.680914    1540 main.go:141] libmachine: Using SSH client type: native
	I0706 20:47:58.681663    1540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.79.241 22 <nil> <nil>}
	I0706 20:47:58.681821    1540 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.70.202"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0706 20:47:58.840398    1540 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.70.202
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0706 20:47:58.840459    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:47:59.507665    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:47:59.507665    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:47:59.508030    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:48:00.472970    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:48:00.473132    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:00.477527    1540 main.go:141] libmachine: Using SSH client type: native
	I0706 20:48:00.477903    1540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.79.241 22 <nil> <nil>}
	I0706 20:48:00.478504    1540 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0706 20:48:01.527424    1540 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0706 20:48:01.527424    1540 machine.go:91] provisioned docker machine in 15.5030865s
	I0706 20:48:01.527424    1540 client.go:171] LocalClient.Create took 55.874237s
	I0706 20:48:01.527424    1540 start.go:167] duration metric: libmachine.API.Create for "multinode-144300" took 55.8748157s
	I0706 20:48:01.527424    1540 start.go:300] post-start starting for "multinode-144300-m02" (driver="hyperv")
	I0706 20:48:01.527424    1540 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0706 20:48:01.537180    1540 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0706 20:48:01.537180    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:48:02.199883    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:48:02.199883    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:02.200008    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:48:03.147604    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:48:03.147604    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:03.147843    1540 sshutil.go:53] new ssh client: &{IP:172.29.79.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\id_rsa Username:docker}
	I0706 20:48:03.256002    1540 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.7188085s)
	I0706 20:48:03.265636    1540 ssh_runner.go:195] Run: cat /etc/os-release
	I0706 20:48:03.272299    1540 command_runner.go:130] > NAME=Buildroot
	I0706 20:48:03.272299    1540 command_runner.go:130] > VERSION=2021.02.12-1-g6f2898e-dirty
	I0706 20:48:03.272299    1540 command_runner.go:130] > ID=buildroot
	I0706 20:48:03.272299    1540 command_runner.go:130] > VERSION_ID=2021.02.12
	I0706 20:48:03.272299    1540 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0706 20:48:03.272299    1540 info.go:137] Remote host: Buildroot 2021.02.12
	I0706 20:48:03.272299    1540 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0706 20:48:03.273012    1540 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0706 20:48:03.273728    1540 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> 82562.pem in /etc/ssl/certs
	I0706 20:48:03.273728    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> /etc/ssl/certs/82562.pem
	I0706 20:48:03.284178    1540 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0706 20:48:03.298990    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /etc/ssl/certs/82562.pem (1708 bytes)
	I0706 20:48:03.335379    1540 start.go:303] post-start completed in 1.8078602s
	I0706 20:48:03.338016    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:48:04.014614    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:48:04.014614    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:04.014730    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:48:05.004121    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:48:05.004510    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:05.004627    1540 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\config.json ...
	I0706 20:48:05.007257    1540 start.go:128] duration metric: createHost completed in 59.3578685s
	I0706 20:48:05.007257    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:48:05.686775    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:48:05.686775    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:05.686881    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:48:06.664453    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:48:06.664453    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:06.668437    1540 main.go:141] libmachine: Using SSH client type: native
	I0706 20:48:06.669665    1540 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.79.241 22 <nil> <nil>}
	I0706 20:48:06.669665    1540 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0706 20:48:06.812195    1540 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688676486.812361051
	
	I0706 20:48:06.812250    1540 fix.go:206] guest clock: 1688676486.812361051
	I0706 20:48:06.812250    1540 fix.go:219] Guest: 2023-07-06 20:48:06.812361051 +0000 UTC Remote: 2023-07-06 20:48:05.0072577 +0000 UTC m=+187.584665401 (delta=1.805103351s)
	I0706 20:48:06.812250    1540 fix.go:190] guest clock delta is within tolerance: 1.805103351s
	I0706 20:48:06.812250    1540 start.go:83] releasing machines lock for "multinode-144300-m02", held for 1m1.1630836s
	I0706 20:48:06.812516    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:48:07.504958    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:48:07.504958    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:07.505038    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:48:08.515960    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:48:08.516268    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:08.519258    1540 out.go:177] * Found network options:
	I0706 20:48:08.521804    1540 out.go:177]   - NO_PROXY=172.29.70.202
	W0706 20:48:08.525644    1540 proxy.go:119] fail to check proxy env: Error ip not in block
	I0706 20:48:08.528544    1540 out.go:177]   - no_proxy=172.29.70.202
	W0706 20:48:08.531303    1540 proxy.go:119] fail to check proxy env: Error ip not in block
	W0706 20:48:08.532829    1540 proxy.go:119] fail to check proxy env: Error ip not in block
	I0706 20:48:08.534858    1540 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0706 20:48:08.535012    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:48:08.542674    1540 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0706 20:48:08.542674    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:48:09.275236    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:48:09.275236    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:09.275349    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:48:09.275349    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:09.275349    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:48:09.275349    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:48:10.372042    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:48:10.372120    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:10.372337    1540 sshutil.go:53] new ssh client: &{IP:172.29.79.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\id_rsa Username:docker}
	I0706 20:48:10.392162    1540 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:48:10.392162    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:10.392480    1540 sshutil.go:53] new ssh client: &{IP:172.29.79.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\id_rsa Username:docker}
	I0706 20:48:10.471951    1540 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0706 20:48:10.472188    1540 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (1.9294052s)
	W0706 20:48:10.472188    1540 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0706 20:48:10.481227    1540 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0706 20:48:10.553645    1540 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0706 20:48:10.553645    1540 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.0187274s)
	I0706 20:48:10.553750    1540 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0706 20:48:10.553750    1540 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0706 20:48:10.553829    1540 start.go:466] detecting cgroup driver to use...
	I0706 20:48:10.553981    1540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 20:48:10.580614    1540 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0706 20:48:10.591741    1540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0706 20:48:10.616221    1540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0706 20:48:10.632365    1540 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0706 20:48:10.641538    1540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0706 20:48:10.665099    1540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 20:48:10.688286    1540 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0706 20:48:10.713536    1540 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 20:48:10.739780    1540 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0706 20:48:10.767637    1540 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0706 20:48:10.792090    1540 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0706 20:48:10.806165    1540 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0706 20:48:10.814302    1540 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0706 20:48:10.837146    1540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:48:10.991409    1540 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0706 20:48:11.018089    1540 start.go:466] detecting cgroup driver to use...
	I0706 20:48:11.028459    1540 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0706 20:48:11.049152    1540 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0706 20:48:11.049152    1540 command_runner.go:130] > [Unit]
	I0706 20:48:11.049152    1540 command_runner.go:130] > Description=Docker Application Container Engine
	I0706 20:48:11.049152    1540 command_runner.go:130] > Documentation=https://docs.docker.com
	I0706 20:48:11.049152    1540 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0706 20:48:11.049152    1540 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0706 20:48:11.049152    1540 command_runner.go:130] > StartLimitBurst=3
	I0706 20:48:11.049152    1540 command_runner.go:130] > StartLimitIntervalSec=60
	I0706 20:48:11.049152    1540 command_runner.go:130] > [Service]
	I0706 20:48:11.049152    1540 command_runner.go:130] > Type=notify
	I0706 20:48:11.049152    1540 command_runner.go:130] > Restart=on-failure
	I0706 20:48:11.049152    1540 command_runner.go:130] > Environment=NO_PROXY=172.29.70.202
	I0706 20:48:11.049152    1540 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0706 20:48:11.049152    1540 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0706 20:48:11.049152    1540 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0706 20:48:11.049152    1540 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0706 20:48:11.049152    1540 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0706 20:48:11.049152    1540 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0706 20:48:11.049152    1540 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0706 20:48:11.049152    1540 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0706 20:48:11.049152    1540 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0706 20:48:11.049152    1540 command_runner.go:130] > ExecStart=
	I0706 20:48:11.049152    1540 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0706 20:48:11.049751    1540 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0706 20:48:11.049751    1540 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0706 20:48:11.049751    1540 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0706 20:48:11.049751    1540 command_runner.go:130] > LimitNOFILE=infinity
	I0706 20:48:11.049751    1540 command_runner.go:130] > LimitNPROC=infinity
	I0706 20:48:11.049751    1540 command_runner.go:130] > LimitCORE=infinity
	I0706 20:48:11.049751    1540 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0706 20:48:11.049751    1540 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0706 20:48:11.049751    1540 command_runner.go:130] > TasksMax=infinity
	I0706 20:48:11.049751    1540 command_runner.go:130] > TimeoutStartSec=0
	I0706 20:48:11.049751    1540 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0706 20:48:11.049751    1540 command_runner.go:130] > Delegate=yes
	I0706 20:48:11.050038    1540 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0706 20:48:11.050186    1540 command_runner.go:130] > KillMode=process
	I0706 20:48:11.050186    1540 command_runner.go:130] > [Install]
	I0706 20:48:11.050186    1540 command_runner.go:130] > WantedBy=multi-user.target
	I0706 20:48:11.059396    1540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 20:48:11.086408    1540 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0706 20:48:11.120436    1540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 20:48:11.148285    1540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 20:48:11.174268    1540 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0706 20:48:11.229044    1540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 20:48:11.249573    1540 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 20:48:11.280512    1540 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0706 20:48:11.290459    1540 ssh_runner.go:195] Run: which cri-dockerd
	I0706 20:48:11.295500    1540 command_runner.go:130] > /usr/bin/cri-dockerd
	I0706 20:48:11.304119    1540 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0706 20:48:11.318459    1540 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0706 20:48:11.353316    1540 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0706 20:48:11.515914    1540 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0706 20:48:11.677811    1540 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0706 20:48:11.677900    1540 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0706 20:48:11.713790    1540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:48:11.862959    1540 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 20:48:13.351727    1540 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.4886251s)
	I0706 20:48:13.362211    1540 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 20:48:13.509687    1540 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0706 20:48:13.649984    1540 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 20:48:13.798633    1540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:48:13.942062    1540 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0706 20:48:13.974840    1540 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:48:14.131668    1540 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0706 20:48:14.243684    1540 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0706 20:48:14.253873    1540 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0706 20:48:14.262918    1540 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0706 20:48:14.262918    1540 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0706 20:48:14.262918    1540 command_runner.go:130] > Device: 16h/22d	Inode: 955         Links: 1
	I0706 20:48:14.262918    1540 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0706 20:48:14.262918    1540 command_runner.go:130] > Access: 2023-07-06 20:48:14.158037318 +0000
	I0706 20:48:14.262918    1540 command_runner.go:130] > Modify: 2023-07-06 20:48:14.158037318 +0000
	I0706 20:48:14.262918    1540 command_runner.go:130] > Change: 2023-07-06 20:48:14.163037600 +0000
	I0706 20:48:14.262918    1540 command_runner.go:130] >  Birth: -
	I0706 20:48:14.262918    1540 start.go:534] Will wait 60s for crictl version
	I0706 20:48:14.272493    1540 ssh_runner.go:195] Run: which crictl
	I0706 20:48:14.277618    1540 command_runner.go:130] > /usr/bin/crictl
	I0706 20:48:14.285315    1540 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0706 20:48:14.332842    1540 command_runner.go:130] > Version:  0.1.0
	I0706 20:48:14.332842    1540 command_runner.go:130] > RuntimeName:  docker
	I0706 20:48:14.332842    1540 command_runner.go:130] > RuntimeVersion:  24.0.2
	I0706 20:48:14.332842    1540 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0706 20:48:14.332951    1540 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0706 20:48:14.339251    1540 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 20:48:14.368044    1540 command_runner.go:130] > 24.0.2
	I0706 20:48:14.375303    1540 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 20:48:14.407249    1540 command_runner.go:130] > 24.0.2
	I0706 20:48:14.413592    1540 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0706 20:48:14.416883    1540 out.go:177]   - env NO_PROXY=172.29.70.202
	I0706 20:48:14.419168    1540 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0706 20:48:14.422659    1540 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0706 20:48:14.422659    1540 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0706 20:48:14.422659    1540 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0706 20:48:14.422659    1540 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:76:79 Flags:up|broadcast|multicast|running}
	I0706 20:48:14.424578    1540 ip.go:210] interface addr: fe80::9492:57c6:5513:d3cc/64
	I0706 20:48:14.424578    1540 ip.go:210] interface addr: 172.29.64.1/20
	I0706 20:48:14.433553    1540 ssh_runner.go:195] Run: grep 172.29.64.1	host.minikube.internal$ /etc/hosts
	I0706 20:48:14.438013    1540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0706 20:48:14.457067    1540 certs.go:56] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300 for IP: 172.29.79.241
	I0706 20:48:14.457115    1540 certs.go:190] acquiring lock for shared ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:48:14.457749    1540 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0706 20:48:14.457854    1540 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0706 20:48:14.457854    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0706 20:48:14.458422    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0706 20:48:14.458973    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0706 20:48:14.459193    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0706 20:48:14.459778    1540 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem (1338 bytes)
	W0706 20:48:14.460077    1540 certs.go:433] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256_empty.pem, impossibly tiny 0 bytes
	I0706 20:48:14.460213    1540 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0706 20:48:14.460507    1540 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0706 20:48:14.460789    1540 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0706 20:48:14.461081    1540 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0706 20:48:14.461637    1540 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem (1708 bytes)
	I0706 20:48:14.461895    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:48:14.462008    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem -> /usr/share/ca-certificates/8256.pem
	I0706 20:48:14.462238    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> /usr/share/ca-certificates/82562.pem
	I0706 20:48:14.462923    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0706 20:48:14.500268    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0706 20:48:14.534715    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0706 20:48:14.567273    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0706 20:48:14.601445    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0706 20:48:14.636228    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem --> /usr/share/ca-certificates/8256.pem (1338 bytes)
	I0706 20:48:14.669547    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /usr/share/ca-certificates/82562.pem (1708 bytes)
	I0706 20:48:14.709923    1540 ssh_runner.go:195] Run: openssl version
	I0706 20:48:14.716637    1540 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0706 20:48:14.725772    1540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8256.pem && ln -fs /usr/share/ca-certificates/8256.pem /etc/ssl/certs/8256.pem"
	I0706 20:48:14.748509    1540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8256.pem
	I0706 20:48:14.754212    1540 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  6 20:14 /usr/share/ca-certificates/8256.pem
	I0706 20:48:14.754212    1540 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul  6 20:14 /usr/share/ca-certificates/8256.pem
	I0706 20:48:14.762049    1540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8256.pem
	I0706 20:48:14.769188    1540 command_runner.go:130] > 51391683
	I0706 20:48:14.777467    1540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8256.pem /etc/ssl/certs/51391683.0"
	I0706 20:48:14.802206    1540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82562.pem && ln -fs /usr/share/ca-certificates/82562.pem /etc/ssl/certs/82562.pem"
	I0706 20:48:14.824867    1540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82562.pem
	I0706 20:48:14.830751    1540 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  6 20:14 /usr/share/ca-certificates/82562.pem
	I0706 20:48:14.831258    1540 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul  6 20:14 /usr/share/ca-certificates/82562.pem
	I0706 20:48:14.839355    1540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82562.pem
	I0706 20:48:14.846120    1540 command_runner.go:130] > 3ec20f2e
	I0706 20:48:14.853554    1540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/82562.pem /etc/ssl/certs/3ec20f2e.0"
	I0706 20:48:14.875084    1540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0706 20:48:14.898444    1540 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:48:14.904246    1540 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  6 20:05 /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:48:14.904370    1540 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  6 20:05 /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:48:14.913061    1540 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:48:14.920322    1540 command_runner.go:130] > b5213941
	I0706 20:48:14.929187    1540 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0706 20:48:14.952689    1540 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0706 20:48:14.957768    1540 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0706 20:48:14.957768    1540 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0706 20:48:14.965626    1540 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0706 20:48:14.996335    1540 command_runner.go:130] > cgroupfs
	I0706 20:48:14.996335    1540 cni.go:84] Creating CNI manager for ""
	I0706 20:48:14.996335    1540 cni.go:137] 2 nodes found, recommending kindnet
	I0706 20:48:14.996335    1540 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0706 20:48:14.996335    1540 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.79.241 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-144300 NodeName:multinode-144300-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.70.202"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.79.241 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0706 20:48:14.996335    1540 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.79.241
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-144300-m02"
	  kubeletExtraArgs:
	    node-ip: 172.29.79.241
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.70.202"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0706 20:48:14.996882    1540 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-144300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.79.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-144300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0706 20:48:15.007142    1540 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0706 20:48:15.021799    1540 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.27.3': No such file or directory
	I0706 20:48:15.022698    1540 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.27.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.27.3': No such file or directory
	
	Initiating transfer...
	I0706 20:48:15.032077    1540 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.27.3
	I0706 20:48:15.048282    1540 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.27.3/kubectl
	I0706 20:48:15.049610    1540 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.27.3/kubeadm
	I0706 20:48:15.049610    1540 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.27.3/kubelet
	I0706 20:48:16.089793    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.27.3/kubectl -> /var/lib/minikube/binaries/v1.27.3/kubectl
	I0706 20:48:16.099527    1540 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.27.3/kubectl
	I0706 20:48:16.105873    1540 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubectl': No such file or directory
	I0706 20:48:16.106087    1540 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.27.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubectl': No such file or directory
	I0706 20:48:16.106204    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.27.3/kubectl --> /var/lib/minikube/binaries/v1.27.3/kubectl (49258496 bytes)
	I0706 20:48:16.606850    1540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 20:48:16.628080    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.27.3/kubelet -> /var/lib/minikube/binaries/v1.27.3/kubelet
	I0706 20:48:16.638042    1540 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.27.3/kubelet
	I0706 20:48:16.642534    1540 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubelet': No such file or directory
	I0706 20:48:16.643567    1540 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.27.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubelet': No such file or directory
	I0706 20:48:16.643601    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.27.3/kubelet --> /var/lib/minikube/binaries/v1.27.3/kubelet (106160128 bytes)
	I0706 20:48:21.160254    1540 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.27.3/kubeadm -> /var/lib/minikube/binaries/v1.27.3/kubeadm
	I0706 20:48:21.168579    1540 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.27.3/kubeadm
	I0706 20:48:21.174681    1540 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubeadm': No such file or directory
	I0706 20:48:21.174681    1540 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.27.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.27.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.27.3/kubeadm': No such file or directory
	I0706 20:48:21.174681    1540 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.27.3/kubeadm --> /var/lib/minikube/binaries/v1.27.3/kubeadm (48160768 bytes)
	I0706 20:48:21.508644    1540 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0706 20:48:21.522477    1540 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0706 20:48:21.547689    1540 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0706 20:48:21.583279    1540 ssh_runner.go:195] Run: grep 172.29.70.202	control-plane.minikube.internal$ /etc/hosts
	I0706 20:48:21.588315    1540 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.70.202	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0706 20:48:21.604560    1540 host.go:66] Checking if "multinode-144300" exists ...
	I0706 20:48:21.605207    1540 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:48:21.605207    1540 start.go:301] JoinCluster: &{Name:multinode-144300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.27.3 ClusterName:multinode-144300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.70.202 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.79.241 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 20:48:21.605396    1540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0706 20:48:21.605475    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:48:22.273753    1540 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:48:22.273753    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:22.273753    1540 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:48:23.268309    1540 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:48:23.268435    1540 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:48:23.269012    1540 sshutil.go:53] new ssh client: &{IP:172.29.70.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 20:48:23.457601    1540 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 17byv0.6eoymbshmk90d4l2 --discovery-token-ca-cert-hash sha256:801bc493e18c108e97245d7c598f2964e6d9529fbb4b1b58fd14b2c4e1eaad5d 
	I0706 20:48:23.457601    1540 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0": (1.8521917s)
	I0706 20:48:23.457601    1540 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.29.79.241 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0706 20:48:23.457601    1540 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 17byv0.6eoymbshmk90d4l2 --discovery-token-ca-cert-hash sha256:801bc493e18c108e97245d7c598f2964e6d9529fbb4b1b58fd14b2c4e1eaad5d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-144300-m02"
	I0706 20:48:23.517960    1540 command_runner.go:130] ! W0706 20:48:23.519460    1329 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0706 20:48:23.665380    1540 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0706 20:48:25.402295    1540 command_runner.go:130] > [preflight] Running pre-flight checks
	I0706 20:48:25.402295    1540 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0706 20:48:25.402295    1540 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0706 20:48:25.402295    1540 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0706 20:48:25.402295    1540 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0706 20:48:25.402295    1540 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0706 20:48:25.402295    1540 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0706 20:48:25.402295    1540 command_runner.go:130] > This node has joined the cluster:
	I0706 20:48:25.402295    1540 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0706 20:48:25.402295    1540 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0706 20:48:25.402295    1540 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0706 20:48:25.403292    1540 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 17byv0.6eoymbshmk90d4l2 --discovery-token-ca-cert-hash sha256:801bc493e18c108e97245d7c598f2964e6d9529fbb4b1b58fd14b2c4e1eaad5d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-144300-m02": (1.9456769s)
	I0706 20:48:25.403292    1540 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0706 20:48:25.576605    1540 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0706 20:48:25.731877    1540 start.go:303] JoinCluster complete in 4.1266403s
	I0706 20:48:25.731877    1540 cni.go:84] Creating CNI manager for ""
	I0706 20:48:25.731877    1540 cni.go:137] 2 nodes found, recommending kindnet
	I0706 20:48:25.741055    1540 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0706 20:48:25.748395    1540 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0706 20:48:25.748560    1540 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0706 20:48:25.748560    1540 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0706 20:48:25.748560    1540 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0706 20:48:25.748560    1540 command_runner.go:130] > Access: 2023-07-06 20:45:37.832339800 +0000
	I0706 20:48:25.748560    1540 command_runner.go:130] > Modify: 2023-06-30 22:28:30.000000000 +0000
	I0706 20:48:25.748560    1540 command_runner.go:130] > Change: 2023-07-06 20:45:29.423000000 +0000
	I0706 20:48:25.748560    1540 command_runner.go:130] >  Birth: -
	I0706 20:48:25.748689    1540 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0706 20:48:25.748689    1540 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0706 20:48:25.782650    1540 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0706 20:48:26.200596    1540 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0706 20:48:26.200596    1540 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0706 20:48:26.200596    1540 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0706 20:48:26.200596    1540 command_runner.go:130] > daemonset.apps/kindnet configured
	I0706 20:48:26.201836    1540 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:48:26.202721    1540 kapi.go:59] client config for multinode-144300: &rest.Config{Host:"https://172.29.70.202:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 20:48:26.203190    1540 round_trippers.go:463] GET https://172.29.70.202:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0706 20:48:26.203190    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:26.203190    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:26.203791    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:26.206854    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:26.206854    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:26.206854    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:26.207700    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:26.207700    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:26.207700    1540 round_trippers.go:580]     Content-Length: 291
	I0706 20:48:26.207700    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:26 GMT
	I0706 20:48:26.207700    1540 round_trippers.go:580]     Audit-Id: 6c0f69fa-5849-4ac6-b4b4-b952439656c1
	I0706 20:48:26.207700    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:26.207700    1540 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b60ac185-d6d8-49e6-a9b3-f1d59eb7807f","resourceVersion":"452","creationTimestamp":"2023-07-06T20:46:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0706 20:48:26.207888    1540 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-144300" context rescaled to 1 replicas
	I0706 20:48:26.208006    1540 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.29.79.241 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0706 20:48:26.211553    1540 out.go:177] * Verifying Kubernetes components...
	I0706 20:48:26.224278    1540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 20:48:26.243055    1540 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:48:26.243055    1540 kapi.go:59] client config for multinode-144300: &rest.Config{Host:"https://172.29.70.202:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 20:48:26.244058    1540 node_ready.go:35] waiting up to 6m0s for node "multinode-144300-m02" to be "Ready" ...
	I0706 20:48:26.244058    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:26.244058    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:26.244058    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:26.244058    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:26.248061    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:48:26.248284    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:26.248284    1540 round_trippers.go:580]     Audit-Id: 309ea2c8-8f13-4284-9d4e-dbcb73bea6a2
	I0706 20:48:26.248284    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:26.248284    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:26.248284    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:26.248284    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:26.248284    1540 round_trippers.go:580]     Content-Length: 3361
	I0706 20:48:26.248284    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:26 GMT
	I0706 20:48:26.248557    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"549","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 2337 chars]
	I0706 20:48:26.756178    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:26.756178    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:26.756178    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:26.756178    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:26.760070    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:26.760070    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:26.760070    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:26 GMT
	I0706 20:48:26.760070    1540 round_trippers.go:580]     Audit-Id: 5e5d67b6-5949-462e-a268-9a47ff5181a4
	I0706 20:48:26.760070    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:26.760070    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:26.760070    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:26.760070    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:26.760070    1540 round_trippers.go:580]     Content-Length: 3361
	I0706 20:48:26.760070    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"549","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 2337 chars]
	I0706 20:48:27.256904    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:27.256991    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:27.256991    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:27.256991    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:27.259607    1540 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:48:27.260532    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:27.260532    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:27.260532    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:27.260588    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:27.260588    1540 round_trippers.go:580]     Content-Length: 3361
	I0706 20:48:27.260588    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:27 GMT
	I0706 20:48:27.260588    1540 round_trippers.go:580]     Audit-Id: 694c763d-7c7f-426c-8aa4-f6b80bce562a
	I0706 20:48:27.260624    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:27.260795    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"549","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 2337 chars]
	I0706 20:48:27.759675    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:27.759799    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:27.759799    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:27.759799    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:27.763208    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:27.764039    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:27.764210    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:27.764210    1540 round_trippers.go:580]     Content-Length: 3361
	I0706 20:48:27.764210    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:27 GMT
	I0706 20:48:27.764210    1540 round_trippers.go:580]     Audit-Id: b98bb333-fd81-4fb5-a26b-db30543d2a7c
	I0706 20:48:27.764328    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:27.764546    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:27.764546    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:27.764546    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"549","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 2337 chars]
	I0706 20:48:28.249371    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:28.249371    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:28.249371    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:28.249371    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:28.254196    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:48:28.254196    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:28.254196    1540 round_trippers.go:580]     Content-Length: 3361
	I0706 20:48:28.254196    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:28 GMT
	I0706 20:48:28.254196    1540 round_trippers.go:580]     Audit-Id: 99fe181a-ad86-46b0-8b2c-0c187c788644
	I0706 20:48:28.254196    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:28.254196    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:28.254196    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:28.254196    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:28.254196    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"549","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 2337 chars]
	I0706 20:48:28.254725    1540 node_ready.go:58] node "multinode-144300-m02" has status "Ready":"False"
	I0706 20:48:28.755845    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:28.755845    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:28.755845    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:28.755845    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:28.759447    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:28.759447    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:28.759447    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:28.759447    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:28.759447    1540 round_trippers.go:580]     Content-Length: 3361
	I0706 20:48:28.759447    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:28 GMT
	I0706 20:48:28.759447    1540 round_trippers.go:580]     Audit-Id: a67e4448-5b59-4167-9a70-cc28fad2bea0
	I0706 20:48:28.759447    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:28.759447    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:28.759447    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"549","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsT
ype":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:vo [truncated 2337 chars]
	I0706 20:48:29.255736    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:29.255736    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:29.255736    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:29.255736    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:29.259212    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:29.260008    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:29.260008    1540 round_trippers.go:580]     Content-Length: 3470
	I0706 20:48:29.260008    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:29 GMT
	I0706 20:48:29.260008    1540 round_trippers.go:580]     Audit-Id: 08198e4b-ba68-46f4-8f19-dbe02b606526
	I0706 20:48:29.260008    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:29.260008    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:29.260008    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:29.260111    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:29.260188    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"554","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2446 chars]
	I0706 20:48:29.761437    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:29.761509    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:29.761509    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:29.761509    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:29.765103    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:29.765367    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:29.765367    1540 round_trippers.go:580]     Audit-Id: 1657e6e1-7fbf-43c4-820f-0b2ddb0b6701
	I0706 20:48:29.765367    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:29.765367    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:29.765367    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:29.765462    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:29.765462    1540 round_trippers.go:580]     Content-Length: 3470
	I0706 20:48:29.765462    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:29 GMT
	I0706 20:48:29.765657    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"554","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2446 chars]
	I0706 20:48:30.260787    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:30.261045    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:30.261177    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:30.261177    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:30.265585    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:48:30.265585    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:30.265585    1540 round_trippers.go:580]     Audit-Id: eb769ee2-0e3f-407d-a827-58a4e8db06e4
	I0706 20:48:30.265585    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:30.265585    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:30.265585    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:30.265585    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:30.265585    1540 round_trippers.go:580]     Content-Length: 3470
	I0706 20:48:30.265585    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:30 GMT
	I0706 20:48:30.265585    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"554","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2446 chars]
	I0706 20:48:30.266212    1540 node_ready.go:58] node "multinode-144300-m02" has status "Ready":"False"
	I0706 20:48:30.762802    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:30.762802    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:30.762802    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:30.762802    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:30.766297    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:30.766297    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:30.766297    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:30 GMT
	I0706 20:48:30.766297    1540 round_trippers.go:580]     Audit-Id: 1dbbd9d2-e68c-4c6d-8423-68a579b767fe
	I0706 20:48:30.766395    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:30.766395    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:30.766395    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:30.766395    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:30.766395    1540 round_trippers.go:580]     Content-Length: 3470
	I0706 20:48:30.766455    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"554","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2446 chars]
	I0706 20:48:31.255341    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:31.255412    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:31.255412    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:31.255412    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:31.258385    1540 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:48:31.258385    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:31.258385    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:31.258385    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:31.258385    1540 round_trippers.go:580]     Content-Length: 3470
	I0706 20:48:31.258385    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:31 GMT
	I0706 20:48:31.258385    1540 round_trippers.go:580]     Audit-Id: 5cb5ebad-5b28-4076-a3ef-d476c5e74767
	I0706 20:48:31.259393    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:31.259451    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:31.259593    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"554","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2446 chars]
	I0706 20:48:31.750344    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:31.750344    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:31.750344    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:31.750344    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:31.754072    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:31.754845    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:31.754845    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:31.754845    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:31.754845    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:31.754845    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:31.754845    1540 round_trippers.go:580]     Content-Length: 3470
	I0706 20:48:31.754942    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:31 GMT
	I0706 20:48:31.754942    1540 round_trippers.go:580]     Audit-Id: 8a3eec7a-d454-4b11-a84c-73e04da346bf
	I0706 20:48:31.755004    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"554","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2446 chars]
	I0706 20:48:32.257839    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:32.257839    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:32.257839    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:32.257839    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:32.262022    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:48:32.262225    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:32.262225    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:32 GMT
	I0706 20:48:32.262225    1540 round_trippers.go:580]     Audit-Id: 4edd286e-1543-4d29-811e-87ad93005e4a
	I0706 20:48:32.262328    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:32.262328    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:32.262430    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:32.262430    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:32.262430    1540 round_trippers.go:580]     Content-Length: 3470
	I0706 20:48:32.262654    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"554","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2446 chars]
	I0706 20:48:32.749168    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:32.749472    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:32.749472    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:32.749549    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:32.754346    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:48:32.754346    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:32.754346    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:32.754737    1540 round_trippers.go:580]     Content-Length: 3470
	I0706 20:48:32.754737    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:32 GMT
	I0706 20:48:32.754737    1540 round_trippers.go:580]     Audit-Id: a23494d6-f9fd-4421-9131-666ebd4bf102
	I0706 20:48:32.754737    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:32.754737    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:32.754737    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:32.754997    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"554","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2446 chars]
	I0706 20:48:32.755298    1540 node_ready.go:58] node "multinode-144300-m02" has status "Ready":"False"
	I0706 20:48:33.256510    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:33.256574    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:33.256603    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:33.256603    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:33.263539    1540 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0706 20:48:33.263539    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:33.263539    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:33.263539    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:33.263539    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:33.263539    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:33.263539    1540 round_trippers.go:580]     Content-Length: 3470
	I0706 20:48:33.263539    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:33 GMT
	I0706 20:48:33.263539    1540 round_trippers.go:580]     Audit-Id: ce4886ff-3b12-4378-bb26-d4c4c5a7634a
	I0706 20:48:33.263539    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"554","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2446 chars]
	I0706 20:48:33.762294    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:33.762294    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:33.762294    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:33.762294    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:33.766429    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:48:33.766921    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:33.766921    1540 round_trippers.go:580]     Content-Length: 3470
	I0706 20:48:33.766961    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:33 GMT
	I0706 20:48:33.766961    1540 round_trippers.go:580]     Audit-Id: fb180f79-7d91-4cb9-946e-31e7ef5721e4
	I0706 20:48:33.766961    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:33.766961    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:33.766961    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:33.767044    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:33.767237    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"554","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2446 chars]
	I0706 20:48:34.254412    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:34.254412    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:34.254731    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:34.254731    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:34.258627    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:34.258627    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:34.258717    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:34.258717    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:34.258717    1540 round_trippers.go:580]     Content-Length: 3470
	I0706 20:48:34.258717    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:34 GMT
	I0706 20:48:34.258717    1540 round_trippers.go:580]     Audit-Id: 5e78e586-097d-420f-894d-2d888cf6de7d
	I0706 20:48:34.258717    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:34.258717    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:34.258785    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"554","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2446 chars]
	I0706 20:48:34.756704    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:34.756772    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:34.756772    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:34.756772    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:34.931641    1540 round_trippers.go:574] Response Status: 200 OK in 174 milliseconds
	I0706 20:48:34.932107    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:34.932107    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:34.932107    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:34.932107    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:34.932254    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:34.932254    1540 round_trippers.go:580]     Content-Length: 3862
	I0706 20:48:34.932301    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:34 GMT
	I0706 20:48:34.932301    1540 round_trippers.go:580]     Audit-Id: 4fc97aa9-a92f-4746-99c4-2f1b8042ec62
	I0706 20:48:34.932401    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"561","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2838 chars]
	I0706 20:48:34.932401    1540 node_ready.go:58] node "multinode-144300-m02" has status "Ready":"False"
	I0706 20:48:35.262933    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:35.262933    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:35.262933    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:35.263022    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:35.266047    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:35.266047    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:35.266131    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:35.266131    1540 round_trippers.go:580]     Content-Length: 3862
	I0706 20:48:35.266131    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:35 GMT
	I0706 20:48:35.266131    1540 round_trippers.go:580]     Audit-Id: 5f90d8be-fdbd-47b2-abc0-1b29164fb98d
	I0706 20:48:35.266131    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:35.266218    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:35.266218    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:35.266291    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"561","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2838 chars]
	I0706 20:48:35.754559    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:35.754559    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:35.754559    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:35.754559    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:35.884586    1540 round_trippers.go:574] Response Status: 200 OK in 130 milliseconds
	I0706 20:48:35.884586    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:35.884586    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:35.884586    1540 round_trippers.go:580]     Content-Length: 3862
	I0706 20:48:35.884586    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:35 GMT
	I0706 20:48:35.884586    1540 round_trippers.go:580]     Audit-Id: 72d44ef3-7f4e-46fc-b344-a959fcd8fcd9
	I0706 20:48:35.884586    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:35.884586    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:35.884586    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:35.884586    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"561","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2838 chars]
	I0706 20:48:36.263891    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:36.263891    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:36.263891    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:36.263891    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:36.268403    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:48:36.268403    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:36.268479    1540 round_trippers.go:580]     Audit-Id: 373e58e6-1ec6-4a31-91a1-7db736e7ba6b
	I0706 20:48:36.268479    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:36.268479    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:36.268479    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:36.268479    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:36.268479    1540 round_trippers.go:580]     Content-Length: 3862
	I0706 20:48:36.268479    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:36 GMT
	I0706 20:48:36.268681    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"561","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2838 chars]
	I0706 20:48:36.756585    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:36.756672    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:36.756672    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:36.756672    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:36.760280    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:36.760280    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:36.760280    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:36.760280    1540 round_trippers.go:580]     Content-Length: 3862
	I0706 20:48:36.760280    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:36 GMT
	I0706 20:48:36.760280    1540 round_trippers.go:580]     Audit-Id: 279e0847-3202-4576-a392-fa8c81f90ac2
	I0706 20:48:36.760280    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:36.760280    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:36.760280    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:36.761273    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"561","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2838 chars]
	I0706 20:48:37.262138    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:37.262441    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:37.262441    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:37.262527    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:37.276813    1540 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0706 20:48:37.277315    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:37.277315    1540 round_trippers.go:580]     Audit-Id: 80f1fe55-099f-474c-8ee8-2a38315fcadd
	I0706 20:48:37.277315    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:37.277315    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:37.277315    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:37.277315    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:37.277428    1540 round_trippers.go:580]     Content-Length: 3862
	I0706 20:48:37.277428    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:37 GMT
	I0706 20:48:37.277529    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"561","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2838 chars]
	I0706 20:48:37.277891    1540 node_ready.go:58] node "multinode-144300-m02" has status "Ready":"False"
	I0706 20:48:37.750925    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:37.751023    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:37.751023    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:37.751023    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:37.755582    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:48:37.755664    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:37.755664    1540 round_trippers.go:580]     Audit-Id: 964d7a7a-6493-4ff9-8698-e78c0a3fae2d
	I0706 20:48:37.755664    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:37.755664    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:37.755664    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:37.755664    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:37.755664    1540 round_trippers.go:580]     Content-Length: 3862
	I0706 20:48:37.755664    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:37 GMT
	I0706 20:48:37.755664    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"561","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2838 chars]
	I0706 20:48:38.258877    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:38.258877    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:38.258980    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:38.258980    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:38.262107    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:38.262107    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:38.262107    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:38.262107    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:38.262107    1540 round_trippers.go:580]     Content-Length: 3862
	I0706 20:48:38.262107    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:38 GMT
	I0706 20:48:38.262107    1540 round_trippers.go:580]     Audit-Id: 8fc92b1f-f4a6-47f8-afca-a470f80ab688
	I0706 20:48:38.262107    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:38.262107    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:38.262107    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"561","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2838 chars]
	I0706 20:48:38.751722    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:38.751722    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:38.751722    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:38.751722    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:38.755627    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:38.755698    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:38.755698    1540 round_trippers.go:580]     Audit-Id: 0decf071-799f-4c20-87f8-90404b3726b5
	I0706 20:48:38.755698    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:38.755698    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:38.755698    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:38.755698    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:38.755698    1540 round_trippers.go:580]     Content-Length: 3862
	I0706 20:48:38.755698    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:38 GMT
	I0706 20:48:38.755698    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"561","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2838 chars]
	I0706 20:48:39.259915    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:39.259999    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:39.260083    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:39.260083    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:39.266925    1540 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0706 20:48:39.267024    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:39.267024    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:39.267024    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:39.267024    1540 round_trippers.go:580]     Content-Length: 3862
	I0706 20:48:39.267024    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:39 GMT
	I0706 20:48:39.267024    1540 round_trippers.go:580]     Audit-Id: bde33182-909e-429c-aab5-6699b18d96e3
	I0706 20:48:39.267024    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:39.267024    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:39.267024    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"561","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2838 chars]
	I0706 20:48:39.750192    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:39.750192    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:39.750192    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:39.750192    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:39.754595    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:48:39.754595    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:39.754595    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:39 GMT
	I0706 20:48:39.754595    1540 round_trippers.go:580]     Audit-Id: 2ac1ac3a-d4eb-45d4-ba70-5f0dc0a45053
	I0706 20:48:39.754595    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:39.754595    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:39.754595    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:39.754595    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:39.754595    1540 round_trippers.go:580]     Content-Length: 3728
	I0706 20:48:39.754595    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"580","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2704 chars]
	I0706 20:48:39.755194    1540 node_ready.go:49] node "multinode-144300-m02" has status "Ready":"True"
	I0706 20:48:39.755194    1540 node_ready.go:38] duration metric: took 13.5110356s waiting for node "multinode-144300-m02" to be "Ready" ...
	I0706 20:48:39.755194    1540 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 20:48:39.755194    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods
	I0706 20:48:39.755194    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:39.755194    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:39.755194    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:39.760822    1540 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:48:39.760822    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:39.760822    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:39 GMT
	I0706 20:48:39.760822    1540 round_trippers.go:580]     Audit-Id: 2b7b9d02-3493-469a-a481-8e62556fbbc8
	I0706 20:48:39.760822    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:39.760822    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:39.760822    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:39.760822    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:39.761587    1540 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"580"},"items":[{"metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"448","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67476 chars]
	I0706 20:48:39.764252    1540 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace to be "Ready" ...
	I0706 20:48:39.764252    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:48:39.764252    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:39.764788    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:39.764788    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:39.767515    1540 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:48:39.768035    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:39.768079    1540 round_trippers.go:580]     Audit-Id: 1384fa8d-1920-4e0f-8a97-7cf15d2658d7
	I0706 20:48:39.768079    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:39.768079    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:39.768079    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:39.768079    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:39.768079    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:39 GMT
	I0706 20:48:39.768375    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"448","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6283 chars]
	I0706 20:48:39.768983    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:48:39.768983    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:39.768983    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:39.768983    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:39.772154    1540 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:48:39.772154    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:39.772154    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:39 GMT
	I0706 20:48:39.772154    1540 round_trippers.go:580]     Audit-Id: 23ab6f02-96b4-455f-9c23-395155ed6133
	I0706 20:48:39.772154    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:39.772154    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:39.772154    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:39.772154    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:39.772154    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"456","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0706 20:48:39.772776    1540 pod_ready.go:92] pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace has status "Ready":"True"
	I0706 20:48:39.772776    1540 pod_ready.go:81] duration metric: took 8.5245ms waiting for pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace to be "Ready" ...
	I0706 20:48:39.772776    1540 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:48:39.772776    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-144300
	I0706 20:48:39.772776    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:39.772776    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:39.772776    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:39.776013    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:39.776013    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:39.776013    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:39 GMT
	I0706 20:48:39.776013    1540 round_trippers.go:580]     Audit-Id: 9afc2fad-6d69-4cd7-ac86-ed79f5a48802
	I0706 20:48:39.776013    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:39.776013    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:39.776013    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:39.776013    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:39.776013    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-144300","namespace":"kube-system","uid":"368f429f-74ac-49a7-9c8f-89f95c37d31d","resourceVersion":"419","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.70.202:2379","kubernetes.io/config.hash":"e0089eceedc87039bc11bd2d8713b69e","kubernetes.io/config.mirror":"e0089eceedc87039bc11bd2d8713b69e","kubernetes.io/config.seen":"2023-07-06T20:46:36.035688887Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5862 chars]
	I0706 20:48:39.776622    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:48:39.776622    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:39.776622    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:39.776622    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:39.779526    1540 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:48:39.779526    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:39.779526    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:39 GMT
	I0706 20:48:39.779526    1540 round_trippers.go:580]     Audit-Id: 58b4f440-2b06-43b4-b91f-35cc3f13395a
	I0706 20:48:39.779526    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:39.779526    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:39.779526    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:39.779526    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:39.779526    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"456","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0706 20:48:39.780141    1540 pod_ready.go:92] pod "etcd-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:48:39.780141    1540 pod_ready.go:81] duration metric: took 7.3648ms waiting for pod "etcd-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:48:39.780141    1540 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:48:39.780141    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-144300
	I0706 20:48:39.780141    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:39.780141    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:39.780141    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:39.783493    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:39.783493    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:39.783493    1540 round_trippers.go:580]     Audit-Id: 9a1be868-db74-4914-be1d-c2ea7b4fc26a
	I0706 20:48:39.783493    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:39.783493    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:39.783493    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:39.783493    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:39.783493    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:39 GMT
	I0706 20:48:39.783493    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-144300","namespace":"kube-system","uid":"a8848557-ed29-484b-9365-b07c4da9051f","resourceVersion":"423","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.70.202:8443","kubernetes.io/config.hash":"cde174f192a25fd146cf674bbcb8ed25","kubernetes.io/config.mirror":"cde174f192a25fd146cf674bbcb8ed25","kubernetes.io/config.seen":"2023-07-06T20:46:36.035683287Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7399 chars]
	I0706 20:48:39.788910    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:48:39.788910    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:39.788910    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:39.788910    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:39.792394    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:39.792394    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:39.792394    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:39 GMT
	I0706 20:48:39.792394    1540 round_trippers.go:580]     Audit-Id: 3263af81-80f2-4ac8-ab4c-b35bfe0f8bdd
	I0706 20:48:39.792394    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:39.792394    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:39.792394    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:39.792394    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:39.792394    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"456","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0706 20:48:39.793015    1540 pod_ready.go:92] pod "kube-apiserver-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:48:39.793015    1540 pod_ready.go:81] duration metric: took 12.8741ms waiting for pod "kube-apiserver-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:48:39.793015    1540 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:48:39.793015    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-144300
	I0706 20:48:39.793015    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:39.793015    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:39.793015    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:39.796590    1540 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:48:39.796638    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:39.796638    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:39.796638    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:39.796638    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:39.796638    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:39.796638    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:39 GMT
	I0706 20:48:39.796730    1540 round_trippers.go:580]     Audit-Id: 226d1036-1e10-4e4a-aaa4-d2243b3cd43b
	I0706 20:48:39.796730    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-144300","namespace":"kube-system","uid":"d9a60269-68e9-4ea2-82fe-63cedee225ef","resourceVersion":"420","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e49c2c2437113c0995d945e240b5b14","kubernetes.io/config.mirror":"8e49c2c2437113c0995d945e240b5b14","kubernetes.io/config.seen":"2023-07-06T20:46:36.035686687Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6969 chars]
	I0706 20:48:39.797347    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:48:39.797347    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:39.797347    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:39.797347    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:39.807482    1540 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0706 20:48:39.807482    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:39.807482    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:39 GMT
	I0706 20:48:39.807482    1540 round_trippers.go:580]     Audit-Id: 176b3148-b6ed-4ac2-bcf4-345b5934d043
	I0706 20:48:39.807482    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:39.807482    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:39.807482    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:39.807482    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:39.808323    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"456","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0706 20:48:39.808323    1540 pod_ready.go:92] pod "kube-controller-manager-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:48:39.808323    1540 pod_ready.go:81] duration metric: took 15.3081ms waiting for pod "kube-controller-manager-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:48:39.808323    1540 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5vmt" in "kube-system" namespace to be "Ready" ...
	I0706 20:48:39.955795    1540 request.go:628] Waited for 146.7099ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f5vmt
	I0706 20:48:39.956080    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f5vmt
	I0706 20:48:39.956080    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:39.956080    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:39.956152    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:39.959916    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:39.960878    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:39.960878    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:39.960925    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:39.960925    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:39 GMT
	I0706 20:48:39.960925    1540 round_trippers.go:580]     Audit-Id: a036c5d3-c491-49b1-8ee1-9bbe5f845ecd
	I0706 20:48:39.960925    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:39.960925    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:39.961029    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-f5vmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"e615de7b-b4a0-4060-aecd-0581b032227d","resourceVersion":"567","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0706 20:48:40.163386    1540 request.go:628] Waited for 201.1904ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:40.163510    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:48:40.163510    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:40.163510    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:40.163632    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:40.166704    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:40.167655    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:40.167655    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:40.167715    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:40.167715    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:40.167715    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:40.167715    1540 round_trippers.go:580]     Content-Length: 3728
	I0706 20:48:40.167715    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:40 GMT
	I0706 20:48:40.167715    1540 round_trippers.go:580]     Audit-Id: d9b51adf-2ca2-4925-9251-3fc41efa1beb
	I0706 20:48:40.167873    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"580","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 2704 chars]
	I0706 20:48:40.168095    1540 pod_ready.go:92] pod "kube-proxy-f5vmt" in "kube-system" namespace has status "Ready":"True"
	I0706 20:48:40.168095    1540 pod_ready.go:81] duration metric: took 359.7693ms waiting for pod "kube-proxy-f5vmt" in "kube-system" namespace to be "Ready" ...
	I0706 20:48:40.168095    1540 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6h62" in "kube-system" namespace to be "Ready" ...
	I0706 20:48:40.354053    1540 request.go:628] Waited for 185.8466ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6h62
	I0706 20:48:40.354304    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6h62
	I0706 20:48:40.354388    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:40.354388    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:40.354388    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:40.362198    1540 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0706 20:48:40.362198    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:40.362198    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:40.362198    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:40.362198    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:40 GMT
	I0706 20:48:40.362198    1540 round_trippers.go:580]     Audit-Id: e81212b6-bee0-40dc-af27-8c7445945035
	I0706 20:48:40.362740    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:40.362740    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:40.363012    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h6h62","generateName":"kube-proxy-","namespace":"kube-system","uid":"6949ff1e-f5c0-4ab2-ae7f-6b30775e220d","resourceVersion":"416","creationTimestamp":"2023-07-06T20:46:49Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5533 chars]
	I0706 20:48:40.563028    1540 request.go:628] Waited for 198.7868ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:48:40.563259    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:48:40.563259    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:40.563259    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:40.563259    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:40.567663    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:48:40.567663    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:40.567663    1540 round_trippers.go:580]     Audit-Id: 62145501-aec6-46f5-8a5e-5f2a9c695656
	I0706 20:48:40.567663    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:40.567663    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:40.567663    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:40.567663    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:40.567663    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:40 GMT
	I0706 20:48:40.568307    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"456","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0706 20:48:40.568419    1540 pod_ready.go:92] pod "kube-proxy-h6h62" in "kube-system" namespace has status "Ready":"True"
	I0706 20:48:40.568419    1540 pod_ready.go:81] duration metric: took 400.3209ms waiting for pod "kube-proxy-h6h62" in "kube-system" namespace to be "Ready" ...
	I0706 20:48:40.568419    1540 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:48:40.752434    1540 request.go:628] Waited for 183.8185ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-144300
	I0706 20:48:40.752507    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-144300
	I0706 20:48:40.752507    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:40.752507    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:40.752507    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:40.757234    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:48:40.757234    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:40.757234    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:40 GMT
	I0706 20:48:40.757234    1540 round_trippers.go:580]     Audit-Id: 580a85a6-7991-468a-a4b8-1196e0a240a2
	I0706 20:48:40.757234    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:40.757234    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:40.757234    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:40.757234    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:40.757717    1540 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-144300","namespace":"kube-system","uid":"70e904dd-fca0-436e-84d9-101fbc1cd9b0","resourceVersion":"421","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee7970f809cc50237f5ebbefc1799bf2","kubernetes.io/config.mirror":"ee7970f809cc50237f5ebbefc1799bf2","kubernetes.io/config.seen":"2023-07-06T20:46:36.035687887Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4699 chars]
	I0706 20:48:40.958181    1540 request.go:628] Waited for 199.6311ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:48:40.958475    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes/multinode-144300
	I0706 20:48:40.958475    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:40.958705    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:40.958705    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:40.961995    1540 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:48:40.962081    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:40.962081    1540 round_trippers.go:580]     Audit-Id: 8b5c318b-2002-4d29-8e1c-c30df1775866
	I0706 20:48:40.962081    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:40.962081    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:40.962081    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:40.962081    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:40.962081    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:40 GMT
	I0706 20:48:40.962251    1540 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"456","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0706 20:48:40.962768    1540 pod_ready.go:92] pod "kube-scheduler-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:48:40.962868    1540 pod_ready.go:81] duration metric: took 394.4456ms waiting for pod "kube-scheduler-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:48:40.962868    1540 pod_ready.go:38] duration metric: took 1.2076653s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 20:48:40.962868    1540 system_svc.go:44] waiting for kubelet service to be running ....
	I0706 20:48:40.972180    1540 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 20:48:40.992497    1540 system_svc.go:56] duration metric: took 29.6288ms WaitForService to wait for kubelet.
	I0706 20:48:40.992580    1540 kubeadm.go:581] duration metric: took 14.7843862s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0706 20:48:40.992640    1540 node_conditions.go:102] verifying NodePressure condition ...
	I0706 20:48:41.159688    1540 request.go:628] Waited for 166.7691ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.70.202:8443/api/v1/nodes
	I0706 20:48:41.159885    1540 round_trippers.go:463] GET https://172.29.70.202:8443/api/v1/nodes
	I0706 20:48:41.159885    1540 round_trippers.go:469] Request Headers:
	I0706 20:48:41.159977    1540 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:48:41.159977    1540 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:48:41.164637    1540 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:48:41.164863    1540 round_trippers.go:577] Response Headers:
	I0706 20:48:41.164863    1540 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:48:41.164863    1540 round_trippers.go:580]     Content-Type: application/json
	I0706 20:48:41.164863    1540 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:48:41.164863    1540 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:48:41.164863    1540 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:48:41 GMT
	I0706 20:48:41.164863    1540 round_trippers.go:580]     Audit-Id: 444a11bc-328b-4de9-8393-6509f2a9aa5b
	I0706 20:48:41.165039    1540 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"581"},"items":[{"metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"456","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 8707 chars]
	I0706 20:48:41.166194    1540 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 20:48:41.166311    1540 node_conditions.go:123] node cpu capacity is 2
	I0706 20:48:41.166311    1540 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 20:48:41.166311    1540 node_conditions.go:123] node cpu capacity is 2
	I0706 20:48:41.166311    1540 node_conditions.go:105] duration metric: took 173.6693ms to run NodePressure ...
	I0706 20:48:41.166311    1540 start.go:228] waiting for startup goroutines ...
	I0706 20:48:41.166433    1540 start.go:242] writing updated cluster config ...
	I0706 20:48:41.177275    1540 ssh_runner.go:195] Run: rm -f paused
	I0706 20:48:41.361708    1540 start.go:642] kubectl: 1.18.2, cluster: 1.27.3 (minor skew: 9)
	I0706 20:48:41.367293    1540 out.go:177] 
	W0706 20:48:41.369800    1540 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.27.3.
	I0706 20:48:41.372009    1540 out.go:177]   - Want kubectl v1.27.3? Try 'minikube kubectl -- get pods -A'
	I0706 20:48:41.378624    1540 out.go:177] * Done! kubectl is now configured to use "multinode-144300" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-07-06 20:45:31 UTC, ends at Thu 2023-07-06 20:49:29 UTC. --
	Jul 06 20:47:02 multinode-144300 dockerd[1304]: time="2023-07-06T20:47:02.503013334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:47:02 multinode-144300 dockerd[1304]: time="2023-07-06T20:47:02.510498792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 20:47:02 multinode-144300 dockerd[1304]: time="2023-07-06T20:47:02.510831404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:47:02 multinode-144300 dockerd[1304]: time="2023-07-06T20:47:02.510952108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 20:47:02 multinode-144300 dockerd[1304]: time="2023-07-06T20:47:02.511049311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:47:03 multinode-144300 cri-dockerd[1195]: time="2023-07-06T20:47:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/791a2e3d6abe6e60c374bd60fbec95df6306e7744ebd4671b0d4a568a7a3a146/resolv.conf as [nameserver 172.29.64.1]"
	Jul 06 20:47:03 multinode-144300 cri-dockerd[1195]: time="2023-07-06T20:47:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4ae69054a21be0224464bfc9174b0c4e86fc1c3db2b23e0d17aa253115389088/resolv.conf as [nameserver 172.29.64.1]"
	Jul 06 20:47:03 multinode-144300 dockerd[1304]: time="2023-07-06T20:47:03.264682490Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 20:47:03 multinode-144300 dockerd[1304]: time="2023-07-06T20:47:03.264815195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:47:03 multinode-144300 dockerd[1304]: time="2023-07-06T20:47:03.264831195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 20:47:03 multinode-144300 dockerd[1304]: time="2023-07-06T20:47:03.264843796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:47:03 multinode-144300 dockerd[1304]: time="2023-07-06T20:47:03.285046075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 20:47:03 multinode-144300 dockerd[1304]: time="2023-07-06T20:47:03.285336884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:47:03 multinode-144300 dockerd[1304]: time="2023-07-06T20:47:03.285411987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 20:47:03 multinode-144300 dockerd[1304]: time="2023-07-06T20:47:03.285430688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:48:51 multinode-144300 dockerd[1304]: time="2023-07-06T20:48:51.509898800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 20:48:51 multinode-144300 dockerd[1304]: time="2023-07-06T20:48:51.510092201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:48:51 multinode-144300 dockerd[1304]: time="2023-07-06T20:48:51.510136801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 20:48:51 multinode-144300 dockerd[1304]: time="2023-07-06T20:48:51.510163401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:48:52 multinode-144300 cri-dockerd[1195]: time="2023-07-06T20:48:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/273f0a120cd486cbd542d72ae7ca9c650e11a955f6e4170a04123351b2207531/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 06 20:48:53 multinode-144300 cri-dockerd[1195]: time="2023-07-06T20:48:53Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jul 06 20:48:53 multinode-144300 dockerd[1304]: time="2023-07-06T20:48:53.235091316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 20:48:53 multinode-144300 dockerd[1304]: time="2023-07-06T20:48:53.235171516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:48:53 multinode-144300 dockerd[1304]: time="2023-07-06T20:48:53.235188516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 20:48:53 multinode-144300 dockerd[1304]: time="2023-07-06T20:48:53.235198416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	0ec910823d675       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   36 seconds ago      Running             busybox                   0                   273f0a120cd48
	7d425ac2e145f       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       0                   4ae69054a21be
	d9e48f8643f47       ead0a4a53df89                                                                                         2 minutes ago       Running             coredns                   0                   791a2e3d6abe6
	2ec34877e4acd       kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974              2 minutes ago       Running             kindnet-cni               0                   c1aec25071ed9
	b92d8760a51ff       5780543258cf0                                                                                         2 minutes ago       Running             kube-proxy                0                   eec796df46dbf
	775dc0b6d0dcc       41697ceeb70b3                                                                                         3 minutes ago       Running             kube-scheduler            0                   04380a3faf912
	f7157ce4715f9       86b6af7dd652c                                                                                         3 minutes ago       Running             etcd                      0                   9bc6a4a0228f3
	67b35d14730ac       08a0c939e61b7                                                                                         3 minutes ago       Running             kube-apiserver            0                   9fe4ead7e3fa6
	9deab8b718f35       7cffc01dba0e1                                                                                         3 minutes ago       Running             kube-controller-manager   0                   f4d2e1b10e79b
	
	* 
	* ==> coredns [d9e48f8643f4] <==
	* [INFO] 10.244.0.3:54816 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000912s
	[INFO] 10.244.1.2:58151 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171401s
	[INFO] 10.244.1.2:55798 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000664s
	[INFO] 10.244.1.2:38081 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000875s
	[INFO] 10.244.1.2:36525 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000699s
	[INFO] 10.244.1.2:44463 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000528s
	[INFO] 10.244.1.2:51138 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000591s
	[INFO] 10.244.1.2:54618 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081101s
	[INFO] 10.244.1.2:55676 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000454s
	[INFO] 10.244.0.3:38721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163001s
	[INFO] 10.244.0.3:42041 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108801s
	[INFO] 10.244.0.3:45947 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160001s
	[INFO] 10.244.0.3:58157 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084901s
	[INFO] 10.244.1.2:34962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129801s
	[INFO] 10.244.1.2:53801 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182602s
	[INFO] 10.244.1.2:52790 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122801s
	[INFO] 10.244.1.2:57732 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092401s
	[INFO] 10.244.0.3:36006 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116401s
	[INFO] 10.244.0.3:44100 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105901s
	[INFO] 10.244.0.3:50791 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091301s
	[INFO] 10.244.0.3:49929 - 5 "PTR IN 1.64.29.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000064601s
	[INFO] 10.244.1.2:38982 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000951s
	[INFO] 10.244.1.2:50028 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000162502s
	[INFO] 10.244.1.2:38044 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000673s
	[INFO] 10.244.1.2:35547 - 5 "PTR IN 1.64.29.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000090801s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-144300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-144300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d384f293eb4d1ae13e8a16440afa4ec48ef3148
	                    minikube.k8s.io/name=multinode-144300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_06T20_46_37_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 06 Jul 2023 20:46:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-144300
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 06 Jul 2023 20:49:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 06 Jul 2023 20:49:10 +0000   Thu, 06 Jul 2023 20:46:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 06 Jul 2023 20:49:10 +0000   Thu, 06 Jul 2023 20:46:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 06 Jul 2023 20:49:10 +0000   Thu, 06 Jul 2023 20:46:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 06 Jul 2023 20:49:10 +0000   Thu, 06 Jul 2023 20:47:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.70.202
	  Hostname:    multinode-144300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 60b5639272024b198a4bc8715611a0cd
	  System UUID:                f2b24827-fd9a-be40-b7bb-ed0eca8a4e3a
	  Boot ID:                    f14857e6-54ab-4d97-973f-7634d0dfaf3c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-47tnt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 coredns-5d78c9869d-m7j99                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m41s
	  kube-system                 etcd-multinode-144300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m53s
	  kube-system                 kindnet-9pjnm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m40s
	  kube-system                 kube-apiserver-multinode-144300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-controller-manager-multinode-144300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 kube-proxy-h6h62                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m40s
	  kube-system                 kube-scheduler-multinode-144300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m53s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m38s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m3s (x8 over 3m3s)  kubelet          Node multinode-144300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m3s (x8 over 3m3s)  kubelet          Node multinode-144300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m3s (x7 over 3m3s)  kubelet          Node multinode-144300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m53s                kubelet          Node multinode-144300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s                kubelet          Node multinode-144300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s                kubelet          Node multinode-144300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m53s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m41s                node-controller  Node multinode-144300 event: Registered Node multinode-144300 in Controller
	  Normal  NodeReady                2m28s                kubelet          Node multinode-144300 status is now: NodeReady
	
	
	Name:               multinode-144300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-144300-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 06 Jul 2023 20:48:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-144300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 06 Jul 2023 20:49:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 06 Jul 2023 20:48:55 +0000   Thu, 06 Jul 2023 20:48:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 06 Jul 2023 20:48:55 +0000   Thu, 06 Jul 2023 20:48:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 06 Jul 2023 20:48:55 +0000   Thu, 06 Jul 2023 20:48:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 06 Jul 2023 20:48:55 +0000   Thu, 06 Jul 2023 20:48:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.79.241
	  Hostname:    multinode-144300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 9adf5d44e3664ebfba2d440ac124b54c
	  System UUID:                d86403da-f9b6-a346-9afe-e8d51877b934
	  Boot ID:                    7c697567-5e6b-495e-882a-c31d63eecf8c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-qp6pw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kindnet-z6sjf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      65s
	  kube-system                 kube-proxy-f5vmt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 54s                kube-proxy       
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x2 over 65s)  kubelet          Node multinode-144300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x2 over 65s)  kubelet          Node multinode-144300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x2 over 65s)  kubelet          Node multinode-144300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  65s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           61s                node-controller  Node multinode-144300-m02 event: Registered Node multinode-144300-m02 in Controller
	  Normal  NodeReady                50s                kubelet          Node multinode-144300-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +1.234400] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +0.961285] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.033069] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.445798] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000052] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +14.285298] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.138066] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[Jul 6 20:46] systemd-fstab-generator[916]: Ignoring "noauto" for root device
	[  +0.497954] systemd-fstab-generator[958]: Ignoring "noauto" for root device
	[  +0.139793] systemd-fstab-generator[969]: Ignoring "noauto" for root device
	[  +0.168167] systemd-fstab-generator[982]: Ignoring "noauto" for root device
	[  +1.292592] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.333668] systemd-fstab-generator[1140]: Ignoring "noauto" for root device
	[  +0.144220] systemd-fstab-generator[1151]: Ignoring "noauto" for root device
	[  +0.156687] systemd-fstab-generator[1162]: Ignoring "noauto" for root device
	[  +0.137387] systemd-fstab-generator[1173]: Ignoring "noauto" for root device
	[  +0.168789] systemd-fstab-generator[1187]: Ignoring "noauto" for root device
	[ +12.011323] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[  +2.129300] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.015898] systemd-fstab-generator[1612]: Ignoring "noauto" for root device
	[  +0.682172] kauditd_printk_skb: 29 callbacks suppressed
	[  +9.191679] systemd-fstab-generator[2548]: Ignoring "noauto" for root device
	[ +22.347604] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [f7157ce4715f] <==
	* {"level":"info","ts":"2023-07-06T20:46:30.426Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-06T20:46:30.427Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-06T20:46:30.428Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"172.29.70.202:2379"}
	{"level":"info","ts":"2023-07-06T20:46:30.428Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ae3d7b74d33a3bd5","local-member-id":"20534944f3f72b4","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T20:46:30.428Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T20:46:30.428Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T20:46:30.432Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-06T20:46:30.432Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-06T20:47:11.807Z","caller":"traceutil/trace.go:171","msg":"trace[27754384] transaction","detail":"{read_only:false; response_revision:459; number_of_response:1; }","duration":"153.130571ms","start":"2023-07-06T20:47:11.654Z","end":"2023-07-06T20:47:11.807Z","steps":["trace[27754384] 'process raft request'  (duration: 152.984067ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-06T20:48:17.876Z","caller":"traceutil/trace.go:171","msg":"trace[236542080] transaction","detail":"{read_only:false; response_revision:512; number_of_response:1; }","duration":"240.913906ms","start":"2023-07-06T20:48:17.635Z","end":"2023-07-06T20:48:17.876Z","steps":["trace[236542080] 'process raft request'  (duration: 240.788705ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-06T20:48:22.813Z","caller":"traceutil/trace.go:171","msg":"trace[206626031] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"279.368198ms","start":"2023-07-06T20:48:22.533Z","end":"2023-07-06T20:48:22.813Z","steps":["trace[206626031] 'process raft request'  (duration: 279.100996ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-06T20:48:23.068Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.422261ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-06T20:48:23.068Z","caller":"traceutil/trace.go:171","msg":"trace[407725553] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:515; }","duration":"168.653863ms","start":"2023-07-06T20:48:22.899Z","end":"2023-07-06T20:48:23.068Z","steps":["trace[407725553] 'range keys from in-memory index tree'  (duration: 168.34316ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-06T20:48:23.068Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.43838ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-06T20:48:23.068Z","caller":"traceutil/trace.go:171","msg":"trace[965088613] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:515; }","duration":"104.926283ms","start":"2023-07-06T20:48:22.963Z","end":"2023-07-06T20:48:23.068Z","steps":["trace[965088613] 'range keys from in-memory index tree'  (duration: 104.284579ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-06T20:48:34.918Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"151.427027ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8265382042310759959 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.29.70.202\" mod_revision:525 > success:<request_put:<key:\"/registry/masterleases/172.29.70.202\" value_size:66 lease:8265382042310759957 >> failure:<request_range:<key:\"/registry/masterleases/172.29.70.202\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-07-06T20:48:34.918Z","caller":"traceutil/trace.go:171","msg":"trace[2027785705] transaction","detail":"{read_only:false; response_revision:560; number_of_response:1; }","duration":"291.425176ms","start":"2023-07-06T20:48:34.627Z","end":"2023-07-06T20:48:34.918Z","steps":["trace[2027785705] 'process raft request'  (duration: 138.959442ms)","trace[2027785705] 'compare'  (duration: 151.351526ms)"],"step_count":2}
	{"level":"info","ts":"2023-07-06T20:48:34.919Z","caller":"traceutil/trace.go:171","msg":"trace[859904011] linearizableReadLoop","detail":"{readStateIndex:594; appliedIndex:593; }","duration":"222.796911ms","start":"2023-07-06T20:48:34.696Z","end":"2023-07-06T20:48:34.918Z","steps":["trace[859904011] 'read index received'  (duration: 70.347977ms)","trace[859904011] 'applied index is now lower than readState.Index'  (duration: 152.448034ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-06T20:48:34.919Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.417315ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/multinode-144300-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-06T20:48:34.919Z","caller":"traceutil/trace.go:171","msg":"trace[1549460741] range","detail":"{range_begin:/registry/leases/kube-node-lease/multinode-144300-m02; range_end:; response_count:0; response_revision:560; }","duration":"223.675616ms","start":"2023-07-06T20:48:34.696Z","end":"2023-07-06T20:48:34.919Z","steps":["trace[1549460741] 'agreement among raft nodes before linearized reading'  (duration: 223.190413ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-06T20:48:34.930Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"167.228833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-144300-m02\" ","response":"range_response_count:1 size:2663"}
	{"level":"info","ts":"2023-07-06T20:48:34.930Z","caller":"traceutil/trace.go:171","msg":"trace[914698629] range","detail":"{range_begin:/registry/minions/multinode-144300-m02; range_end:; response_count:1; response_revision:561; }","duration":"167.282234ms","start":"2023-07-06T20:48:34.763Z","end":"2023-07-06T20:48:34.930Z","steps":["trace[914698629] 'agreement among raft nodes before linearized reading'  (duration: 167.194833ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-06T20:48:37.279Z","caller":"traceutil/trace.go:171","msg":"trace[671624632] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"161.578273ms","start":"2023-07-06T20:48:37.117Z","end":"2023-07-06T20:48:37.279Z","steps":["trace[671624632] 'process raft request'  (duration: 161.315071ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-06T20:48:37.563Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"223.993787ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/volumeattachments/\" range_end:\"/registry/volumeattachments0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-06T20:48:37.563Z","caller":"traceutil/trace.go:171","msg":"trace[683609824] range","detail":"{range_begin:/registry/volumeattachments/; range_end:/registry/volumeattachments0; response_count:0; response_revision:571; }","duration":"224.165988ms","start":"2023-07-06T20:48:37.338Z","end":"2023-07-06T20:48:37.563Z","steps":["trace[683609824] 'count revisions from in-memory index tree'  (duration: 223.855585ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  20:49:29 up 4 min,  0 users,  load average: 0.42, 0.46, 0.21
	Linux multinode-144300 5.10.57 #1 SMP Fri Jun 30 21:41:53 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [2ec34877e4ac] <==
	* I0706 20:48:28.134042       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.29.79.241 Flags: [] Table: 0} 
	I0706 20:48:38.141479       1 main.go:223] Handling node with IPs: map[172.29.70.202:{}]
	I0706 20:48:38.141537       1 main.go:227] handling current node
	I0706 20:48:38.141549       1 main.go:223] Handling node with IPs: map[172.29.79.241:{}]
	I0706 20:48:38.141555       1 main.go:250] Node multinode-144300-m02 has CIDR [10.244.1.0/24] 
	I0706 20:48:48.150554       1 main.go:223] Handling node with IPs: map[172.29.70.202:{}]
	I0706 20:48:48.150671       1 main.go:227] handling current node
	I0706 20:48:48.150685       1 main.go:223] Handling node with IPs: map[172.29.79.241:{}]
	I0706 20:48:48.150709       1 main.go:250] Node multinode-144300-m02 has CIDR [10.244.1.0/24] 
	I0706 20:48:58.179392       1 main.go:223] Handling node with IPs: map[172.29.70.202:{}]
	I0706 20:48:58.179482       1 main.go:227] handling current node
	I0706 20:48:58.179495       1 main.go:223] Handling node with IPs: map[172.29.79.241:{}]
	I0706 20:48:58.179502       1 main.go:250] Node multinode-144300-m02 has CIDR [10.244.1.0/24] 
	I0706 20:49:08.186184       1 main.go:223] Handling node with IPs: map[172.29.70.202:{}]
	I0706 20:49:08.186303       1 main.go:227] handling current node
	I0706 20:49:08.186318       1 main.go:223] Handling node with IPs: map[172.29.79.241:{}]
	I0706 20:49:08.186326       1 main.go:250] Node multinode-144300-m02 has CIDR [10.244.1.0/24] 
	I0706 20:49:18.199281       1 main.go:223] Handling node with IPs: map[172.29.70.202:{}]
	I0706 20:49:18.199368       1 main.go:227] handling current node
	I0706 20:49:18.199380       1 main.go:223] Handling node with IPs: map[172.29.79.241:{}]
	I0706 20:49:18.199410       1 main.go:250] Node multinode-144300-m02 has CIDR [10.244.1.0/24] 
	I0706 20:49:28.206102       1 main.go:223] Handling node with IPs: map[172.29.70.202:{}]
	I0706 20:49:28.206145       1 main.go:227] handling current node
	I0706 20:49:28.206156       1 main.go:223] Handling node with IPs: map[172.29.79.241:{}]
	I0706 20:49:28.206162       1 main.go:250] Node multinode-144300-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [67b35d14730a] <==
	* I0706 20:46:32.233161       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0706 20:46:32.233167       1 cache.go:39] Caches are synced for autoregister controller
	I0706 20:46:32.252107       1 controller.go:624] quota admission added evaluator for: namespaces
	I0706 20:46:32.282223       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0706 20:46:32.292051       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0706 20:46:32.294595       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0706 20:46:32.293179       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0706 20:46:32.293267       1 shared_informer.go:318] Caches are synced for configmaps
	I0706 20:46:32.325616       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0706 20:46:32.719629       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0706 20:46:33.105656       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0706 20:46:33.115047       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0706 20:46:33.115082       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0706 20:46:34.195231       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0706 20:46:34.277481       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0706 20:46:34.443530       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0706 20:46:34.460967       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [172.29.70.202]
	I0706 20:46:34.462405       1 controller.go:624] quota admission added evaluator for: endpoints
	I0706 20:46:34.469238       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0706 20:46:35.156393       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0706 20:46:35.813065       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0706 20:46:35.835113       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0706 20:46:35.854273       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0706 20:46:48.585497       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0706 20:46:49.029538       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [9deab8b718f3] <==
	* I0706 20:46:48.764024       1 shared_informer.go:318] Caches are synced for resource quota
	I0706 20:46:48.786779       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-multinode-144300" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0706 20:46:48.786805       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-multinode-144300" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0706 20:46:48.786814       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-multinode-144300" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0706 20:46:48.786821       1 event.go:307] "Event occurred" object="kube-system/etcd-multinode-144300" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0706 20:46:48.875251       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0706 20:46:48.987615       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-zgwbn"
	I0706 20:46:49.034622       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-m7j99"
	I0706 20:46:49.094253       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-h6h62"
	I0706 20:46:49.102497       1 shared_informer.go:318] Caches are synced for garbage collector
	I0706 20:46:49.102525       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0706 20:46:49.113215       1 shared_informer.go:318] Caches are synced for garbage collector
	I0706 20:46:49.126090       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-9pjnm"
	I0706 20:46:49.207804       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-zgwbn"
	I0706 20:47:03.731244       1 node_lifecycle_controller.go:1046] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0706 20:48:24.488911       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-144300-m02\" does not exist"
	I0706 20:48:24.523091       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-z6sjf"
	I0706 20:48:24.538476       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-f5vmt"
	I0706 20:48:24.561694       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-144300-m02" podCIDRs=[10.244.1.0/24]
	I0706 20:48:28.746279       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-144300-m02"
	I0706 20:48:28.746341       1 event.go:307] "Event occurred" object="multinode-144300-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-144300-m02 event: Registered Node multinode-144300-m02 in Controller"
	W0706 20:48:39.304147       1 topologycache.go:232] Can't get CPU or zone information for multinode-144300-m02 node
	I0706 20:48:50.630613       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0706 20:48:50.671519       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-qp6pw"
	I0706 20:48:50.712414       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-47tnt"
	
	* 
	* ==> kube-proxy [b92d8760a51f] <==
	* I0706 20:46:50.479465       1 node.go:141] Successfully retrieved node IP: 172.29.70.202
	I0706 20:46:50.479762       1 server_others.go:110] "Detected node IP" address="172.29.70.202"
	I0706 20:46:50.479793       1 server_others.go:554] "Using iptables proxy"
	I0706 20:46:50.544792       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0706 20:46:50.544848       1 server_others.go:192] "Using iptables Proxier"
	I0706 20:46:50.546832       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0706 20:46:50.548167       1 server.go:658] "Version info" version="v1.27.3"
	I0706 20:46:50.548186       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 20:46:50.551347       1 config.go:188] "Starting service config controller"
	I0706 20:46:50.551435       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0706 20:46:50.552682       1 config.go:97] "Starting endpoint slice config controller"
	I0706 20:46:50.552860       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0706 20:46:50.575014       1 config.go:315] "Starting node config controller"
	I0706 20:46:50.575070       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0706 20:46:50.652948       1 shared_informer.go:318] Caches are synced for service config
	I0706 20:46:50.653084       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0706 20:46:50.675520       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [775dc0b6d0dc] <==
	* W0706 20:46:33.144110       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0706 20:46:33.144427       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0706 20:46:33.202806       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0706 20:46:33.202847       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0706 20:46:33.303483       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0706 20:46:33.303591       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0706 20:46:33.487559       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0706 20:46:33.487607       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0706 20:46:33.498950       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0706 20:46:33.498972       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0706 20:46:33.501016       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0706 20:46:33.501208       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0706 20:46:33.529401       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0706 20:46:33.529427       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0706 20:46:33.573133       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0706 20:46:33.573673       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0706 20:46:33.600606       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0706 20:46:33.600636       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0706 20:46:33.701202       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0706 20:46:33.701307       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0706 20:46:33.753911       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0706 20:46:33.754185       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0706 20:46:33.761519       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0706 20:46:33.761859       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0706 20:46:35.361091       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-07-06 20:45:31 UTC, ends at Thu 2023-07-06 20:49:29 UTC. --
	Jul 06 20:46:53 multinode-144300 kubelet[2568]: I0706 20:46:53.801605    2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1aec25071ed95ceab5ad2f058d046ff022e22c0e235055cb86c2dd5bb0738a3"
	Jul 06 20:46:56 multinode-144300 kubelet[2568]: I0706 20:46:56.203168    2568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-h6h62" podStartSLOduration=7.203132488 podCreationTimestamp="2023-07-06 20:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-06 20:46:53.834210553 +0000 UTC m=+18.066061223" watchObservedRunningTime="2023-07-06 20:46:56.203132488 +0000 UTC m=+20.434983158"
	Jul 06 20:47:01 multinode-144300 kubelet[2568]: I0706 20:47:01.892092    2568 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 06 20:47:01 multinode-144300 kubelet[2568]: I0706 20:47:01.937238    2568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-9pjnm" podStartSLOduration=10.23908452 podCreationTimestamp="2023-07-06 20:46:49 +0000 UTC" firstStartedPulling="2023-07-06 20:46:53.808213703 +0000 UTC m=+18.040064373" lastFinishedPulling="2023-07-06 20:46:56.506209417 +0000 UTC m=+20.738060087" observedRunningTime="2023-07-06 20:46:57.87089417 +0000 UTC m=+22.102744940" watchObservedRunningTime="2023-07-06 20:47:01.937080234 +0000 UTC m=+26.168930904"
	Jul 06 20:47:01 multinode-144300 kubelet[2568]: I0706 20:47:01.937563    2568 topology_manager.go:212] "Topology Admit Handler"
	Jul 06 20:47:01 multinode-144300 kubelet[2568]: I0706 20:47:01.950558    2568 topology_manager.go:212] "Topology Admit Handler"
	Jul 06 20:47:02 multinode-144300 kubelet[2568]: I0706 20:47:02.110931    2568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxkmf\" (UniqueName: \"kubernetes.io/projected/75b208e7-5f24-4849-867c-c7fa45213999-kube-api-access-bxkmf\") pod \"storage-provisioner\" (UID: \"75b208e7-5f24-4849-867c-c7fa45213999\") " pod="kube-system/storage-provisioner"
	Jul 06 20:47:02 multinode-144300 kubelet[2568]: I0706 20:47:02.111082    2568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/75b208e7-5f24-4849-867c-c7fa45213999-tmp\") pod \"storage-provisioner\" (UID: \"75b208e7-5f24-4849-867c-c7fa45213999\") " pod="kube-system/storage-provisioner"
	Jul 06 20:47:02 multinode-144300 kubelet[2568]: I0706 20:47:02.111113    2568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfa019d5-9528-4f25-8aab-03d1d276bb0c-config-volume\") pod \"coredns-5d78c9869d-m7j99\" (UID: \"dfa019d5-9528-4f25-8aab-03d1d276bb0c\") " pod="kube-system/coredns-5d78c9869d-m7j99"
	Jul 06 20:47:02 multinode-144300 kubelet[2568]: I0706 20:47:02.111139    2568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxzvs\" (UniqueName: \"kubernetes.io/projected/dfa019d5-9528-4f25-8aab-03d1d276bb0c-kube-api-access-rxzvs\") pod \"coredns-5d78c9869d-m7j99\" (UID: \"dfa019d5-9528-4f25-8aab-03d1d276bb0c\") " pod="kube-system/coredns-5d78c9869d-m7j99"
	Jul 06 20:47:03 multinode-144300 kubelet[2568]: I0706 20:47:03.126302    2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ae69054a21be0224464bfc9174b0c4e86fc1c3db2b23e0d17aa253115389088"
	Jul 06 20:47:03 multinode-144300 kubelet[2568]: I0706 20:47:03.150173    2568 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="791a2e3d6abe6e60c374bd60fbec95df6306e7744ebd4671b0d4a568a7a3a146"
	Jul 06 20:47:04 multinode-144300 kubelet[2568]: I0706 20:47:04.215029    2568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.214988744 podCreationTimestamp="2023-07-06 20:46:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-06 20:47:04.195869319 +0000 UTC m=+28.427719989" watchObservedRunningTime="2023-07-06 20:47:04.214988744 +0000 UTC m=+28.446839414"
	Jul 06 20:47:36 multinode-144300 kubelet[2568]: E0706 20:47:36.285656    2568 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 06 20:47:36 multinode-144300 kubelet[2568]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 06 20:47:36 multinode-144300 kubelet[2568]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 06 20:47:36 multinode-144300 kubelet[2568]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 06 20:48:36 multinode-144300 kubelet[2568]: E0706 20:48:36.285692    2568 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 06 20:48:36 multinode-144300 kubelet[2568]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 06 20:48:36 multinode-144300 kubelet[2568]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 06 20:48:36 multinode-144300 kubelet[2568]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 06 20:48:50 multinode-144300 kubelet[2568]: I0706 20:48:50.770518    2568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-m7j99" podStartSLOduration=122.770475168 podCreationTimestamp="2023-07-06 20:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-06 20:47:04.215412458 +0000 UTC m=+28.447263228" watchObservedRunningTime="2023-07-06 20:48:50.770475168 +0000 UTC m=+135.002325938"
	Jul 06 20:48:50 multinode-144300 kubelet[2568]: I0706 20:48:50.770758    2568 topology_manager.go:212] "Topology Admit Handler"
	Jul 06 20:48:50 multinode-144300 kubelet[2568]: I0706 20:48:50.895265    2568 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwbl6\" (UniqueName: \"kubernetes.io/projected/12f3117a-c156-4909-8cd7-117df3106624-kube-api-access-bwbl6\") pod \"busybox-67b7f59bb-47tnt\" (UID: \"12f3117a-c156-4909-8cd7-117df3106624\") " pod="default/busybox-67b7f59bb-47tnt"
	Jul 06 20:48:54 multinode-144300 kubelet[2568]: I0706 20:48:54.368326    2568 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-67b7f59bb-47tnt" podStartSLOduration=3.362801554 podCreationTimestamp="2023-07-06 20:48:50 +0000 UTC" firstStartedPulling="2023-07-06 20:48:52.082624902 +0000 UTC m=+136.314475672" lastFinishedPulling="2023-07-06 20:48:53.088111225 +0000 UTC m=+137.319961895" observedRunningTime="2023-07-06 20:48:54.36726277 +0000 UTC m=+138.599113440" watchObservedRunningTime="2023-07-06 20:48:54.368287777 +0000 UTC m=+138.600138547"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-144300 -n multinode-144300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-144300 -n multinode-144300: (4.535468s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-144300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (36.76s)

TestMultiNode/serial/RestartKeepsNodes (311.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-144300
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-144300
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-144300: (54.1719596s)
multinode_test.go:295: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-144300 --wait=true -v=8 --alsologtostderr
E0706 20:56:31.246546    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:58:31.908909    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:59:56.193744    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
multinode_test.go:295: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-144300 --wait=true -v=8 --alsologtostderr: (4m0.8773575s)
multinode_test.go:300: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-144300
multinode_test.go:307: reported node list is not the same after restart. Before restart: multinode-144300	172.29.70.202
multinode-144300-m02	172.29.79.241
multinode-144300-m03	172.29.66.123
After restart: multinode-144300	172.29.78.0
multinode-144300-m02	172.29.74.65
multinode-144300-m03	172.29.78.173
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-144300 -n multinode-144300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-144300 -n multinode-144300: (4.5290986s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 logs -n 25: (4.4157164s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-144300 ssh -n                                                                                                  | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:52 UTC | 06 Jul 23 20:52 UTC |
	|         | multinode-144300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-144300 cp multinode-144300-m02:/home/docker/cp-test.txt                                                        | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:52 UTC | 06 Jul 23 20:52 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2544889846\001\cp-test_multinode-144300-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-144300 ssh -n                                                                                                  | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:52 UTC | 06 Jul 23 20:52 UTC |
	|         | multinode-144300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-144300 cp multinode-144300-m02:/home/docker/cp-test.txt                                                        | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:52 UTC | 06 Jul 23 20:52 UTC |
	|         | multinode-144300:/home/docker/cp-test_multinode-144300-m02_multinode-144300.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-144300 ssh -n                                                                                                  | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:52 UTC | 06 Jul 23 20:52 UTC |
	|         | multinode-144300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-144300 ssh -n multinode-144300 sudo cat                                                                        | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:52 UTC | 06 Jul 23 20:52 UTC |
	|         | /home/docker/cp-test_multinode-144300-m02_multinode-144300.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-144300 cp multinode-144300-m02:/home/docker/cp-test.txt                                                        | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:52 UTC | 06 Jul 23 20:52 UTC |
	|         | multinode-144300-m03:/home/docker/cp-test_multinode-144300-m02_multinode-144300-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-144300 ssh -n                                                                                                  | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:52 UTC | 06 Jul 23 20:52 UTC |
	|         | multinode-144300-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-144300 ssh -n multinode-144300-m03 sudo cat                                                                    | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:52 UTC | 06 Jul 23 20:52 UTC |
	|         | /home/docker/cp-test_multinode-144300-m02_multinode-144300-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-144300 cp testdata\cp-test.txt                                                                                 | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:53 UTC | 06 Jul 23 20:53 UTC |
	|         | multinode-144300-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-144300 ssh -n                                                                                                  | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:53 UTC | 06 Jul 23 20:53 UTC |
	|         | multinode-144300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-144300 cp multinode-144300-m03:/home/docker/cp-test.txt                                                        | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:53 UTC | 06 Jul 23 20:53 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2544889846\001\cp-test_multinode-144300-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-144300 ssh -n                                                                                                  | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:53 UTC | 06 Jul 23 20:53 UTC |
	|         | multinode-144300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-144300 cp multinode-144300-m03:/home/docker/cp-test.txt                                                        | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:53 UTC | 06 Jul 23 20:53 UTC |
	|         | multinode-144300:/home/docker/cp-test_multinode-144300-m03_multinode-144300.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-144300 ssh -n                                                                                                  | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:53 UTC | 06 Jul 23 20:53 UTC |
	|         | multinode-144300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-144300 ssh -n multinode-144300 sudo cat                                                                        | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:53 UTC | 06 Jul 23 20:53 UTC |
	|         | /home/docker/cp-test_multinode-144300-m03_multinode-144300.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-144300 cp multinode-144300-m03:/home/docker/cp-test.txt                                                        | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:53 UTC | 06 Jul 23 20:53 UTC |
	|         | multinode-144300-m02:/home/docker/cp-test_multinode-144300-m03_multinode-144300-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-144300 ssh -n                                                                                                  | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:53 UTC | 06 Jul 23 20:53 UTC |
	|         | multinode-144300-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-144300 ssh -n multinode-144300-m02 sudo cat                                                                    | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:53 UTC | 06 Jul 23 20:53 UTC |
	|         | /home/docker/cp-test_multinode-144300-m03_multinode-144300-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-144300 node stop m03                                                                                           | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:53 UTC | 06 Jul 23 20:53 UTC |
	| node    | multinode-144300 node start                                                                                              | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:54 UTC | 06 Jul 23 20:55 UTC |
	|         | m03 --alsologtostderr                                                                                                    |                  |                   |         |                     |                     |
	| node    | list -p multinode-144300                                                                                                 | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:55 UTC |                     |
	| stop    | -p multinode-144300                                                                                                      | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:55 UTC | 06 Jul 23 20:56 UTC |
	| start   | -p multinode-144300                                                                                                      | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:56 UTC | 06 Jul 23 21:00 UTC |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	| node    | list -p multinode-144300                                                                                                 | multinode-144300 | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:00 UTC |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/06 20:56:29
	Running on machine: minikube6
	Binary: Built with gc go1.20.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0706 20:56:29.279983    8620 out.go:296] Setting OutFile to fd 856 ...
	I0706 20:56:29.336153    8620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 20:56:29.336245    8620 out.go:309] Setting ErrFile to fd 832...
	I0706 20:56:29.336245    8620 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 20:56:29.354502    8620 out.go:303] Setting JSON to false
	I0706 20:56:29.357291    8620 start.go:127] hostinfo: {"hostname":"minikube6","uptime":495126,"bootTime":1688181863,"procs":143,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3155 Build 19045.3155","kernelVersion":"10.0.19045.3155 Build 19045.3155","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0706 20:56:29.357291    8620 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 20:56:29.363624    8620 out.go:177] * [multinode-144300] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	I0706 20:56:29.367276    8620 notify.go:220] Checking for updates...
	I0706 20:56:29.369548    8620 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:56:29.372498    8620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 20:56:29.376699    8620 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0706 20:56:29.379450    8620 out.go:177]   - MINIKUBE_LOCATION=16832
	I0706 20:56:29.381889    8620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 20:56:29.385002    8620 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:56:29.385412    8620 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 20:56:30.830835    8620 out.go:177] * Using the hyperv driver based on existing profile
	I0706 20:56:30.833619    8620 start.go:297] selected driver: hyperv
	I0706 20:56:30.833848    8620 start.go:944] validating driver "hyperv" against &{Name:multinode-144300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.27.3 ClusterName:multinode-144300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.70.202 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.79.241 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.66.123 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inac
cel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 20:56:30.834142    8620 start.go:955] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 20:56:30.878092    8620 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0706 20:56:30.878092    8620 cni.go:84] Creating CNI manager for ""
	I0706 20:56:30.878092    8620 cni.go:137] 3 nodes found, recommending kindnet
	I0706 20:56:30.878092    8620 start_flags.go:319] config:
	{Name:multinode-144300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-144300 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.70.202 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.79.241 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.66.123 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false ist
io-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0}
	I0706 20:56:30.879280    8620 iso.go:125] acquiring lock: {Name:mk7608081596fbafb2a8a8984d26768bdf174467 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 20:56:30.882706    8620 out.go:177] * Starting control plane node multinode-144300 in cluster multinode-144300
	I0706 20:56:30.886986    8620 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 20:56:30.886986    8620 preload.go:148] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0706 20:56:30.887695    8620 cache.go:57] Caching tarball of preloaded images
	I0706 20:56:30.887891    8620 preload.go:174] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0706 20:56:30.888149    8620 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 20:56:30.888412    8620 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\config.json ...
	I0706 20:56:30.890026    8620 start.go:365] acquiring machines lock for multinode-144300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 20:56:30.890026    8620 start.go:369] acquired machines lock for "multinode-144300" in 0s
	I0706 20:56:30.891089    8620 start.go:96] Skipping create...Using existing machine configuration
	I0706 20:56:30.891089    8620 fix.go:54] fixHost starting: 
	I0706 20:56:30.891280    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:56:31.548098    8620 main.go:141] libmachine: [stdout =====>] : Off
	
	I0706 20:56:31.548343    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:31.548343    8620 fix.go:102] recreateIfNeeded on multinode-144300: state=Stopped err=<nil>
	W0706 20:56:31.548454    8620 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 20:56:31.552204    8620 out.go:177] * Restarting existing hyperv VM for "multinode-144300" ...
	I0706 20:56:31.556604    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-144300
	I0706 20:56:33.013901    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:56:33.013901    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:33.014049    8620 main.go:141] libmachine: Waiting for host to start...
	I0706 20:56:33.014084    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:56:33.663199    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:56:33.663199    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:33.663276    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:56:34.580549    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:56:34.580549    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:35.584891    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:56:36.232406    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:56:36.232640    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:36.232703    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:56:37.168588    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:56:37.168588    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:38.183444    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:56:38.816691    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:56:38.816728    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:38.816892    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:56:39.787703    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:56:39.787703    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:40.800305    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:56:41.467187    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:56:41.467546    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:41.467546    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:56:42.374294    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:56:42.374469    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:43.376019    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:56:44.022692    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:56:44.022692    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:44.022870    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:56:44.941563    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:56:44.941563    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:45.943589    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:56:46.592273    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:56:46.592323    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:46.592358    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:56:47.530671    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:56:47.530671    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:48.534424    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:56:49.184670    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:56:49.184980    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:49.184980    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:56:50.090625    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:56:50.090847    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:51.092406    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:56:51.760471    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:56:51.760471    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:51.760578    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:56:52.670434    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:56:52.670434    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:53.674327    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:56:54.341734    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:56:54.344899    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:54.345005    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:56:55.371564    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:56:55.371883    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:55.374589    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:56:56.035544    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:56:56.035644    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:56.035644    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:56:56.977187    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:56:56.977187    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:56.977863    8620 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\config.json ...
	I0706 20:56:56.980271    8620 machine.go:88] provisioning docker machine ...
	I0706 20:56:56.980381    8620 buildroot.go:166] provisioning hostname "multinode-144300"
	I0706 20:56:56.980455    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:56:57.628923    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:56:57.628923    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:57.628923    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:56:58.558362    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:56:58.558483    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:58.561823    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:56:58.563228    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.78.0 22 <nil> <nil>}
	I0706 20:56:58.563764    8620 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-144300 && echo "multinode-144300" | sudo tee /etc/hostname
	I0706 20:56:58.723815    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-144300
	
	I0706 20:56:58.723871    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:56:59.381138    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:56:59.381138    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:56:59.381221    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:57:00.297722    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:57:00.297829    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:00.302519    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:57:00.303620    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.78.0 22 <nil> <nil>}
	I0706 20:57:00.303620    8620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-144300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-144300/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-144300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0706 20:57:00.455279    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0706 20:57:00.455312    8620 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0706 20:57:00.455418    8620 buildroot.go:174] setting up certificates
	I0706 20:57:00.455468    8620 provision.go:83] configureAuth start
	I0706 20:57:00.455539    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:57:01.100137    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:57:01.100137    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:01.100137    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:57:02.064907    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:57:02.064907    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:02.064907    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:57:02.697667    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:57:02.697745    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:02.697874    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:57:03.643035    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:57:03.643035    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:03.643469    8620 provision.go:138] copyHostCerts
	I0706 20:57:03.643622    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0706 20:57:03.643622    8620 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0706 20:57:03.643622    8620 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0706 20:57:03.644226    8620 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0706 20:57:03.645457    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0706 20:57:03.645658    8620 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0706 20:57:03.645854    8620 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0706 20:57:03.646155    8620 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0706 20:57:03.647009    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0706 20:57:03.647009    8620 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0706 20:57:03.647009    8620 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0706 20:57:03.647691    8620 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0706 20:57:03.648301    8620 provision.go:112] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-144300 san=[172.29.78.0 172.29.78.0 localhost 127.0.0.1 minikube multinode-144300]
	I0706 20:57:03.720487    8620 provision.go:172] copyRemoteCerts
	I0706 20:57:03.728607    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0706 20:57:03.728607    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:57:04.372742    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:57:04.372925    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:04.372925    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:57:05.303574    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:57:05.303574    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:05.303574    8620 sshutil.go:53] new ssh client: &{IP:172.29.78.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 20:57:05.407300    8620 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.6786812s)
	I0706 20:57:05.407300    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0706 20:57:05.407685    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0706 20:57:05.443453    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0706 20:57:05.443836    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1224 bytes)
	I0706 20:57:05.479236    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0706 20:57:05.479606    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0706 20:57:05.513573    8620 provision.go:86] duration metric: configureAuth took 5.0580024s
	I0706 20:57:05.513573    8620 buildroot.go:189] setting minikube options for container-runtime
	I0706 20:57:05.514207    8620 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:57:05.514448    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:57:06.164129    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:57:06.164129    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:06.164129    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:57:07.114060    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:57:07.114152    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:07.117933    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:57:07.118851    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.78.0 22 <nil> <nil>}
	I0706 20:57:07.118851    8620 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0706 20:57:07.245431    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0706 20:57:07.245504    8620 buildroot.go:70] root file system type: tmpfs
	I0706 20:57:07.245504    8620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0706 20:57:07.245504    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:57:07.886979    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:57:07.886979    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:07.887087    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:57:08.850085    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:57:08.850085    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:08.853795    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:57:08.854458    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.78.0 22 <nil> <nil>}
	I0706 20:57:08.854458    8620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0706 20:57:08.998345    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0706 20:57:08.998478    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:57:09.637441    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:57:09.637441    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:09.637441    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:57:10.565954    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:57:10.565954    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:10.569496    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:57:10.570155    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.78.0 22 <nil> <nil>}
	I0706 20:57:10.570155    8620 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0706 20:57:11.837577    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0706 20:57:11.837650    8620 machine.go:91] provisioned docker machine in 14.8572703s
	I0706 20:57:11.837711    8620 start.go:300] post-start starting for "multinode-144300" (driver="hyperv")
	I0706 20:57:11.837711    8620 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0706 20:57:11.847088    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0706 20:57:11.847612    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:57:12.485083    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:57:12.485083    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:12.485083    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:57:13.414104    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:57:13.414104    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:13.414104    8620 sshutil.go:53] new ssh client: &{IP:172.29.78.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 20:57:13.524848    8620 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.6777473s)
	I0706 20:57:13.535069    8620 ssh_runner.go:195] Run: cat /etc/os-release
	I0706 20:57:13.541077    8620 command_runner.go:130] > NAME=Buildroot
	I0706 20:57:13.541077    8620 command_runner.go:130] > VERSION=2021.02.12-1-g6f2898e-dirty
	I0706 20:57:13.541077    8620 command_runner.go:130] > ID=buildroot
	I0706 20:57:13.541077    8620 command_runner.go:130] > VERSION_ID=2021.02.12
	I0706 20:57:13.541077    8620 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0706 20:57:13.541077    8620 info.go:137] Remote host: Buildroot 2021.02.12
	I0706 20:57:13.541077    8620 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0706 20:57:13.541077    8620 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0706 20:57:13.542565    8620 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> 82562.pem in /etc/ssl/certs
	I0706 20:57:13.542630    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> /etc/ssl/certs/82562.pem
	I0706 20:57:13.551007    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0706 20:57:13.564482    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /etc/ssl/certs/82562.pem (1708 bytes)
	I0706 20:57:13.598147    8620 start.go:303] post-start completed in 1.7603611s
	I0706 20:57:13.598147    8620 fix.go:56] fixHost completed within 42.7067466s
	I0706 20:57:13.598237    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:57:14.236119    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:57:14.236119    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:14.236261    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:57:15.167013    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:57:15.167013    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:15.170843    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:57:15.171603    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.78.0 22 <nil> <nil>}
	I0706 20:57:15.171603    8620 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0706 20:57:15.296385    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688677035.296933458
	
	I0706 20:57:15.296385    8620 fix.go:206] guest clock: 1688677035.296933458
	I0706 20:57:15.296385    8620 fix.go:219] Guest: 2023-07-06 20:57:15.296933458 +0000 UTC Remote: 2023-07-06 20:57:13.5981474 +0000 UTC m=+44.394578401 (delta=1.698786058s)
	I0706 20:57:15.296385    8620 fix.go:190] guest clock delta is within tolerance: 1.698786058s
	I0706 20:57:15.296385    8620 start.go:83] releasing machines lock for "multinode-144300", held for 44.4050816s
	I0706 20:57:15.296385    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:57:15.922392    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:57:15.922392    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:15.922540    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:57:16.869631    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:57:16.869699    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:16.872771    8620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0706 20:57:16.872947    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:57:16.880847    8620 ssh_runner.go:195] Run: cat /version.json
	I0706 20:57:16.880847    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:57:17.600480    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:57:17.600480    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:17.600722    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:57:17.600722    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:57:17.600722    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:17.600894    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:57:18.639790    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:57:18.639790    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:18.640189    8620 sshutil.go:53] new ssh client: &{IP:172.29.78.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 20:57:18.657025    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:57:18.657025    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:18.657428    8620 sshutil.go:53] new ssh client: &{IP:172.29.78.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 20:57:18.731049    8620 command_runner.go:130] > {"iso_version": "v1.30.1-1688144767-16765", "kicbase_version": "v0.0.39-1687538068-16731", "minikube_version": "v1.30.1", "commit": "ea1fcc3c7b384862404a5ec9a04bec1496959f9b"}
	I0706 20:57:18.731104    8620 ssh_runner.go:235] Completed: cat /version.json: (1.8502438s)
	I0706 20:57:18.740644    8620 ssh_runner.go:195] Run: systemctl --version
	I0706 20:57:18.846850    8620 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0706 20:57:18.846850    8620 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (1.974001s)
	I0706 20:57:18.846850    8620 command_runner.go:130] > systemd 247 (247)
	I0706 20:57:18.846850    8620 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0706 20:57:18.855763    8620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0706 20:57:18.864331    8620 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0706 20:57:18.864579    8620 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0706 20:57:18.873054    8620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0706 20:57:18.893625    8620 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0706 20:57:18.894470    8620 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0706 20:57:18.894518    8620 start.go:466] detecting cgroup driver to use...
	I0706 20:57:18.894805    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 20:57:18.921076    8620 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0706 20:57:18.930159    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0706 20:57:18.954922    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0706 20:57:18.970435    8620 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0706 20:57:18.979385    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0706 20:57:19.001718    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 20:57:19.024607    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0706 20:57:19.047134    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 20:57:19.074095    8620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0706 20:57:19.097111    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0706 20:57:19.121296    8620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0706 20:57:19.135376    8620 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0706 20:57:19.144107    8620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0706 20:57:19.166666    8620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:57:19.303375    8620 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0706 20:57:19.326220    8620 start.go:466] detecting cgroup driver to use...
	I0706 20:57:19.334216    8620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0706 20:57:19.359106    8620 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0706 20:57:19.359106    8620 command_runner.go:130] > [Unit]
	I0706 20:57:19.359106    8620 command_runner.go:130] > Description=Docker Application Container Engine
	I0706 20:57:19.359106    8620 command_runner.go:130] > Documentation=https://docs.docker.com
	I0706 20:57:19.359106    8620 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0706 20:57:19.359106    8620 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0706 20:57:19.359106    8620 command_runner.go:130] > StartLimitBurst=3
	I0706 20:57:19.359106    8620 command_runner.go:130] > StartLimitIntervalSec=60
	I0706 20:57:19.359106    8620 command_runner.go:130] > [Service]
	I0706 20:57:19.359106    8620 command_runner.go:130] > Type=notify
	I0706 20:57:19.359106    8620 command_runner.go:130] > Restart=on-failure
	I0706 20:57:19.359106    8620 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0706 20:57:19.359106    8620 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0706 20:57:19.359106    8620 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0706 20:57:19.359106    8620 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0706 20:57:19.359106    8620 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0706 20:57:19.359106    8620 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0706 20:57:19.359106    8620 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0706 20:57:19.359106    8620 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0706 20:57:19.359106    8620 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0706 20:57:19.359106    8620 command_runner.go:130] > ExecStart=
	I0706 20:57:19.359106    8620 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0706 20:57:19.359106    8620 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0706 20:57:19.359106    8620 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0706 20:57:19.359106    8620 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0706 20:57:19.359106    8620 command_runner.go:130] > LimitNOFILE=infinity
	I0706 20:57:19.359106    8620 command_runner.go:130] > LimitNPROC=infinity
	I0706 20:57:19.359106    8620 command_runner.go:130] > LimitCORE=infinity
	I0706 20:57:19.359106    8620 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0706 20:57:19.359106    8620 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0706 20:57:19.359106    8620 command_runner.go:130] > TasksMax=infinity
	I0706 20:57:19.359106    8620 command_runner.go:130] > TimeoutStartSec=0
	I0706 20:57:19.359106    8620 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0706 20:57:19.359106    8620 command_runner.go:130] > Delegate=yes
	I0706 20:57:19.359106    8620 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0706 20:57:19.359106    8620 command_runner.go:130] > KillMode=process
	I0706 20:57:19.359106    8620 command_runner.go:130] > [Install]
	I0706 20:57:19.359106    8620 command_runner.go:130] > WantedBy=multi-user.target
	I0706 20:57:19.368834    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 20:57:19.393648    8620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0706 20:57:19.419732    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 20:57:19.442722    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 20:57:19.469717    8620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0706 20:57:19.523874    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 20:57:19.540822    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 20:57:19.563842    8620 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0706 20:57:19.575517    8620 ssh_runner.go:195] Run: which cri-dockerd
	I0706 20:57:19.580893    8620 command_runner.go:130] > /usr/bin/cri-dockerd
	I0706 20:57:19.589509    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0706 20:57:19.603736    8620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0706 20:57:19.641288    8620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0706 20:57:19.782500    8620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0706 20:57:19.909817    8620 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0706 20:57:19.909913    8620 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0706 20:57:19.946531    8620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:57:20.089415    8620 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 20:57:21.718530    8620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6290705s)
	I0706 20:57:21.728839    8620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 20:57:21.868506    8620 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0706 20:57:22.005639    8620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 20:57:22.141860    8620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:57:22.279015    8620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0706 20:57:22.308986    8620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:57:22.439258    8620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0706 20:57:22.531120    8620 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0706 20:57:22.540784    8620 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0706 20:57:22.548070    8620 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0706 20:57:22.548132    8620 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0706 20:57:22.548132    8620 command_runner.go:130] > Device: 16h/22d	Inode: 881         Links: 1
	I0706 20:57:22.548132    8620 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0706 20:57:22.548132    8620 command_runner.go:130] > Access: 2023-07-06 20:57:22.459171092 +0000
	I0706 20:57:22.548132    8620 command_runner.go:130] > Modify: 2023-07-06 20:57:22.459171092 +0000
	I0706 20:57:22.548185    8620 command_runner.go:130] > Change: 2023-07-06 20:57:22.462171287 +0000
	I0706 20:57:22.548185    8620 command_runner.go:130] >  Birth: -
	I0706 20:57:22.548185    8620 start.go:534] Will wait 60s for crictl version
	I0706 20:57:22.557709    8620 ssh_runner.go:195] Run: which crictl
	I0706 20:57:22.562287    8620 command_runner.go:130] > /usr/bin/crictl
	I0706 20:57:22.572104    8620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0706 20:57:22.619508    8620 command_runner.go:130] > Version:  0.1.0
	I0706 20:57:22.619508    8620 command_runner.go:130] > RuntimeName:  docker
	I0706 20:57:22.619508    8620 command_runner.go:130] > RuntimeVersion:  24.0.2
	I0706 20:57:22.619508    8620 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0706 20:57:22.622349    8620 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0706 20:57:22.628702    8620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 20:57:22.658155    8620 command_runner.go:130] > 24.0.2
	I0706 20:57:22.674569    8620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 20:57:22.708242    8620 command_runner.go:130] > 24.0.2
	I0706 20:57:22.714167    8620 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0706 20:57:22.714167    8620 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0706 20:57:22.720149    8620 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0706 20:57:22.720149    8620 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0706 20:57:22.720149    8620 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0706 20:57:22.720149    8620 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:76:79 Flags:up|broadcast|multicast|running}
	I0706 20:57:22.722141    8620 ip.go:210] interface addr: fe80::9492:57c6:5513:d3cc/64
	I0706 20:57:22.722141    8620 ip.go:210] interface addr: 172.29.64.1/20
	I0706 20:57:22.731141    8620 ssh_runner.go:195] Run: grep 172.29.64.1	host.minikube.internal$ /etc/hosts
	I0706 20:57:22.736108    8620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0706 20:57:22.752609    8620 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 20:57:22.758140    8620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0706 20:57:22.785746    8620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.3
	I0706 20:57:22.785823    8620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.3
	I0706 20:57:22.785823    8620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.3
	I0706 20:57:22.785823    8620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.3
	I0706 20:57:22.785823    8620 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0706 20:57:22.786029    8620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0706 20:57:22.786112    8620 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0706 20:57:22.786132    8620 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0706 20:57:22.786132    8620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0706 20:57:22.786132    8620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0706 20:57:22.786196    8620 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0706 20:57:22.786285    8620 docker.go:566] Images already preloaded, skipping extraction
	I0706 20:57:22.793392    8620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0706 20:57:22.816489    8620 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.27.3
	I0706 20:57:22.816550    8620 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.27.3
	I0706 20:57:22.816550    8620 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.27.3
	I0706 20:57:22.816550    8620 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.27.3
	I0706 20:57:22.816592    8620 command_runner.go:130] > kindest/kindnetd:v20230511-dc714da8
	I0706 20:57:22.816592    8620 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.10.1
	I0706 20:57:22.816641    8620 command_runner.go:130] > registry.k8s.io/etcd:3.5.7-0
	I0706 20:57:22.816705    8620 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0706 20:57:22.816732    8620 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0706 20:57:22.816732    8620 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0706 20:57:22.816794    8620 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0706 20:57:22.816853    8620 cache_images.go:84] Images are preloaded, skipping loading
	I0706 20:57:22.823919    8620 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0706 20:57:22.851955    8620 command_runner.go:130] > cgroupfs
	I0706 20:57:22.852968    8620 cni.go:84] Creating CNI manager for ""
	I0706 20:57:22.852968    8620 cni.go:137] 3 nodes found, recommending kindnet
	I0706 20:57:22.852968    8620 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0706 20:57:22.852968    8620 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.78.0 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-144300 NodeName:multinode-144300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.78.0"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.78.0 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0706 20:57:22.852968    8620 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.78.0
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-144300"
	  kubeletExtraArgs:
	    node-ip: 172.29.78.0
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.78.0"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0706 20:57:22.853506    8620 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-144300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.78.0
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-144300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0706 20:57:22.861353    8620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0706 20:57:22.875628    8620 command_runner.go:130] > kubeadm
	I0706 20:57:22.875628    8620 command_runner.go:130] > kubectl
	I0706 20:57:22.875701    8620 command_runner.go:130] > kubelet
	I0706 20:57:22.875701    8620 binaries.go:44] Found k8s binaries, skipping transfer
	I0706 20:57:22.884554    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0706 20:57:22.896968    8620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0706 20:57:22.919953    8620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0706 20:57:22.941805    8620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0706 20:57:22.976145    8620 ssh_runner.go:195] Run: grep 172.29.78.0	control-plane.minikube.internal$ /etc/hosts
	I0706 20:57:22.981632    8620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.78.0	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0706 20:57:22.998293    8620 certs.go:56] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300 for IP: 172.29.78.0
	I0706 20:57:22.998293    8620 certs.go:190] acquiring lock for shared ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:57:22.999140    8620 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0706 20:57:22.999337    8620 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0706 20:57:23.000310    8620 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\client.key
	I0706 20:57:23.000447    8620 certs.go:319] generating minikube signed cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.key.53d78020
	I0706 20:57:23.000473    8620 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.crt.53d78020 with IP's: [172.29.78.0 10.96.0.1 127.0.0.1 10.0.0.1]
	I0706 20:57:23.413782    8620 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.crt.53d78020 ...
	I0706 20:57:23.413782    8620 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.crt.53d78020: {Name:mk205548af5e1093befea1585746a2fe73cc64df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:57:23.416819    8620 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.key.53d78020 ...
	I0706 20:57:23.416819    8620 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.key.53d78020: {Name:mk9410a6c931b53ef22be91fbf2462b85934c53e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:57:23.417751    8620 certs.go:337] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.crt.53d78020 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.crt
	I0706 20:57:23.429840    8620 certs.go:341] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.key.53d78020 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.key
	I0706 20:57:23.431143    8620 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.key
	I0706 20:57:23.431143    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0706 20:57:23.431407    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0706 20:57:23.431554    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0706 20:57:23.432246    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0706 20:57:23.432452    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0706 20:57:23.432602    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0706 20:57:23.432602    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0706 20:57:23.432602    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0706 20:57:23.433372    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem (1338 bytes)
	W0706 20:57:23.433572    8620 certs.go:433] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256_empty.pem, impossibly tiny 0 bytes
	I0706 20:57:23.433572    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0706 20:57:23.433572    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0706 20:57:23.434238    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0706 20:57:23.434533    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0706 20:57:23.435341    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem (1708 bytes)
	I0706 20:57:23.435546    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:57:23.435546    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem -> /usr/share/ca-certificates/8256.pem
	I0706 20:57:23.435546    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> /usr/share/ca-certificates/82562.pem
	I0706 20:57:23.436191    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0706 20:57:23.473927    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0706 20:57:23.505497    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0706 20:57:23.539721    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0706 20:57:23.574986    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0706 20:57:23.610908    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0706 20:57:23.646105    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0706 20:57:23.678472    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0706 20:57:23.712944    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0706 20:57:23.745750    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem --> /usr/share/ca-certificates/8256.pem (1338 bytes)
	I0706 20:57:23.777735    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /usr/share/ca-certificates/82562.pem (1708 bytes)
	I0706 20:57:23.811822    8620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0706 20:57:23.848400    8620 ssh_runner.go:195] Run: openssl version
	I0706 20:57:23.855076    8620 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0706 20:57:23.863786    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0706 20:57:23.887091    8620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:57:23.893077    8620 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  6 20:05 /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:57:23.893077    8620 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  6 20:05 /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:57:23.900860    8620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:57:23.907853    8620 command_runner.go:130] > b5213941
	I0706 20:57:23.917676    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0706 20:57:23.941827    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8256.pem && ln -fs /usr/share/ca-certificates/8256.pem /etc/ssl/certs/8256.pem"
	I0706 20:57:23.964734    8620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8256.pem
	I0706 20:57:23.970274    8620 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  6 20:14 /usr/share/ca-certificates/8256.pem
	I0706 20:57:23.970274    8620 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul  6 20:14 /usr/share/ca-certificates/8256.pem
	I0706 20:57:23.979160    8620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8256.pem
	I0706 20:57:23.986118    8620 command_runner.go:130] > 51391683
	I0706 20:57:23.994163    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8256.pem /etc/ssl/certs/51391683.0"
	I0706 20:57:24.016139    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82562.pem && ln -fs /usr/share/ca-certificates/82562.pem /etc/ssl/certs/82562.pem"
	I0706 20:57:24.038154    8620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82562.pem
	I0706 20:57:24.044250    8620 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  6 20:14 /usr/share/ca-certificates/82562.pem
	I0706 20:57:24.044250    8620 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul  6 20:14 /usr/share/ca-certificates/82562.pem
	I0706 20:57:24.053118    8620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82562.pem
	I0706 20:57:24.064201    8620 command_runner.go:130] > 3ec20f2e
	I0706 20:57:24.075452    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/82562.pem /etc/ssl/certs/3ec20f2e.0"
	I0706 20:57:24.099244    8620 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0706 20:57:24.106077    8620 command_runner.go:130] > ca.crt
	I0706 20:57:24.106077    8620 command_runner.go:130] > ca.key
	I0706 20:57:24.106077    8620 command_runner.go:130] > healthcheck-client.crt
	I0706 20:57:24.106077    8620 command_runner.go:130] > healthcheck-client.key
	I0706 20:57:24.106077    8620 command_runner.go:130] > peer.crt
	I0706 20:57:24.106077    8620 command_runner.go:130] > peer.key
	I0706 20:57:24.106077    8620 command_runner.go:130] > server.crt
	I0706 20:57:24.106077    8620 command_runner.go:130] > server.key
	I0706 20:57:24.115178    8620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0706 20:57:24.123309    8620 command_runner.go:130] > Certificate will not expire
	I0706 20:57:24.132432    8620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0706 20:57:24.139961    8620 command_runner.go:130] > Certificate will not expire
	I0706 20:57:24.148627    8620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0706 20:57:24.157237    8620 command_runner.go:130] > Certificate will not expire
	I0706 20:57:24.166654    8620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0706 20:57:24.174044    8620 command_runner.go:130] > Certificate will not expire
	I0706 20:57:24.182843    8620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0706 20:57:24.190619    8620 command_runner.go:130] > Certificate will not expire
	I0706 20:57:24.198935    8620 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0706 20:57:24.209538    8620 command_runner.go:130] > Certificate will not expire
	I0706 20:57:24.209988    8620 kubeadm.go:404] StartCluster: {Name:multinode-144300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-144300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.78.0 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.79.241 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.66.123 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 20:57:24.217723    8620 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0706 20:57:24.249522    8620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0706 20:57:24.265533    8620 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0706 20:57:24.265533    8620 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0706 20:57:24.266351    8620 command_runner.go:130] > /var/lib/minikube/etcd:
	I0706 20:57:24.266351    8620 command_runner.go:130] > member
	I0706 20:57:24.266351    8620 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0706 20:57:24.266414    8620 kubeadm.go:636] restartCluster start
	I0706 20:57:24.275223    8620 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0706 20:57:24.289235    8620 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0706 20:57:24.290363    8620 kubeconfig.go:135] verify returned: extract IP: "multinode-144300" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:57:24.290363    8620 kubeconfig.go:146] "multinode-144300" context is missing from C:\Users\jenkins.minikube6\minikube-integration\kubeconfig - will repair!
	I0706 20:57:24.291292    8620 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:57:24.303503    8620 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:57:24.304509    8620 kapi.go:59] client config for multinode-144300: &rest.Config{Host:"https://172.29.78.0:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 20:57:24.306502    8620 cert_rotation.go:137] Starting client certificate rotation controller
	I0706 20:57:24.315240    8620 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0706 20:57:24.329995    8620 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0706 20:57:24.330045    8620 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0706 20:57:24.330045    8620 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0706 20:57:24.330045    8620 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0706 20:57:24.330045    8620 command_runner.go:130] >  kind: InitConfiguration
	I0706 20:57:24.330045    8620 command_runner.go:130] >  localAPIEndpoint:
	I0706 20:57:24.330094    8620 command_runner.go:130] > -  advertiseAddress: 172.29.70.202
	I0706 20:57:24.330094    8620 command_runner.go:130] > +  advertiseAddress: 172.29.78.0
	I0706 20:57:24.330094    8620 command_runner.go:130] >    bindPort: 8443
	I0706 20:57:24.330142    8620 command_runner.go:130] >  bootstrapTokens:
	I0706 20:57:24.330142    8620 command_runner.go:130] >    - groups:
	I0706 20:57:24.330142    8620 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0706 20:57:24.330142    8620 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0706 20:57:24.330187    8620 command_runner.go:130] >    name: "multinode-144300"
	I0706 20:57:24.330187    8620 command_runner.go:130] >    kubeletExtraArgs:
	I0706 20:57:24.330187    8620 command_runner.go:130] > -    node-ip: 172.29.70.202
	I0706 20:57:24.330187    8620 command_runner.go:130] > +    node-ip: 172.29.78.0
	I0706 20:57:24.330187    8620 command_runner.go:130] >    taints: []
	I0706 20:57:24.330187    8620 command_runner.go:130] >  ---
	I0706 20:57:24.330238    8620 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0706 20:57:24.330238    8620 command_runner.go:130] >  kind: ClusterConfiguration
	I0706 20:57:24.330238    8620 command_runner.go:130] >  apiServer:
	I0706 20:57:24.330296    8620 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.29.70.202"]
	I0706 20:57:24.330296    8620 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.29.78.0"]
	I0706 20:57:24.330296    8620 command_runner.go:130] >    extraArgs:
	I0706 20:57:24.330296    8620 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0706 20:57:24.330296    8620 command_runner.go:130] >  controllerManager:
	I0706 20:57:24.330386    8620 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.29.70.202
	+  advertiseAddress: 172.29.78.0
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-144300"
	   kubeletExtraArgs:
	-    node-ip: 172.29.70.202
	+    node-ip: 172.29.78.0
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.29.70.202"]
	+  certSANs: ["127.0.0.1", "localhost", "172.29.78.0"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
	I0706 20:57:24.330386    8620 kubeadm.go:1128] stopping kube-system containers ...
	I0706 20:57:24.337376    8620 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0706 20:57:24.363055    8620 command_runner.go:130] > 7d425ac2e145
	I0706 20:57:24.363996    8620 command_runner.go:130] > d9e48f8643f4
	I0706 20:57:24.363996    8620 command_runner.go:130] > 4ae69054a21b
	I0706 20:57:24.363996    8620 command_runner.go:130] > 791a2e3d6abe
	I0706 20:57:24.363996    8620 command_runner.go:130] > 2ec34877e4ac
	I0706 20:57:24.363996    8620 command_runner.go:130] > b92d8760a51f
	I0706 20:57:24.363996    8620 command_runner.go:130] > c1aec25071ed
	I0706 20:57:24.363996    8620 command_runner.go:130] > eec796df46db
	I0706 20:57:24.363996    8620 command_runner.go:130] > 775dc0b6d0dc
	I0706 20:57:24.363996    8620 command_runner.go:130] > f7157ce4715f
	I0706 20:57:24.363996    8620 command_runner.go:130] > 67b35d14730a
	I0706 20:57:24.363996    8620 command_runner.go:130] > 9deab8b718f3
	I0706 20:57:24.363996    8620 command_runner.go:130] > 04380a3faf91
	I0706 20:57:24.363996    8620 command_runner.go:130] > f4d2e1b10e79
	I0706 20:57:24.363996    8620 command_runner.go:130] > 9fe4ead7e3fa
	I0706 20:57:24.363996    8620 command_runner.go:130] > 9bc6a4a0228f
	I0706 20:57:24.363996    8620 docker.go:462] Stopping containers: [7d425ac2e145 d9e48f8643f4 4ae69054a21b 791a2e3d6abe 2ec34877e4ac b92d8760a51f c1aec25071ed eec796df46db 775dc0b6d0dc f7157ce4715f 67b35d14730a 9deab8b718f3 04380a3faf91 f4d2e1b10e79 9fe4ead7e3fa 9bc6a4a0228f]
	I0706 20:57:24.370052    8620 ssh_runner.go:195] Run: docker stop 7d425ac2e145 d9e48f8643f4 4ae69054a21b 791a2e3d6abe 2ec34877e4ac b92d8760a51f c1aec25071ed eec796df46db 775dc0b6d0dc f7157ce4715f 67b35d14730a 9deab8b718f3 04380a3faf91 f4d2e1b10e79 9fe4ead7e3fa 9bc6a4a0228f
	I0706 20:57:24.396013    8620 command_runner.go:130] > 7d425ac2e145
	I0706 20:57:24.396013    8620 command_runner.go:130] > d9e48f8643f4
	I0706 20:57:24.396013    8620 command_runner.go:130] > 4ae69054a21b
	I0706 20:57:24.396013    8620 command_runner.go:130] > 791a2e3d6abe
	I0706 20:57:24.396013    8620 command_runner.go:130] > 2ec34877e4ac
	I0706 20:57:24.396013    8620 command_runner.go:130] > b92d8760a51f
	I0706 20:57:24.396013    8620 command_runner.go:130] > c1aec25071ed
	I0706 20:57:24.396013    8620 command_runner.go:130] > eec796df46db
	I0706 20:57:24.396013    8620 command_runner.go:130] > 775dc0b6d0dc
	I0706 20:57:24.396013    8620 command_runner.go:130] > f7157ce4715f
	I0706 20:57:24.396013    8620 command_runner.go:130] > 67b35d14730a
	I0706 20:57:24.396013    8620 command_runner.go:130] > 9deab8b718f3
	I0706 20:57:24.396013    8620 command_runner.go:130] > 04380a3faf91
	I0706 20:57:24.396013    8620 command_runner.go:130] > f4d2e1b10e79
	I0706 20:57:24.396013    8620 command_runner.go:130] > 9fe4ead7e3fa
	I0706 20:57:24.396013    8620 command_runner.go:130] > 9bc6a4a0228f
	I0706 20:57:24.407233    8620 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0706 20:57:24.441771    8620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0706 20:57:24.454523    8620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0706 20:57:24.455588    8620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0706 20:57:24.455588    8620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0706 20:57:24.455588    8620 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0706 20:57:24.455842    8620 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0706 20:57:24.464597    8620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0706 20:57:24.477364    8620 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0706 20:57:24.477469    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0706 20:57:24.853991    8620 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0706 20:57:24.854039    8620 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0706 20:57:24.854039    8620 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0706 20:57:24.854039    8620 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0706 20:57:24.854039    8620 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0706 20:57:24.854039    8620 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0706 20:57:24.854107    8620 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0706 20:57:24.854107    8620 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0706 20:57:24.854107    8620 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0706 20:57:24.854107    8620 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0706 20:57:24.854156    8620 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0706 20:57:24.854156    8620 command_runner.go:130] > [certs] Using the existing "sa" key
	I0706 20:57:24.854156    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0706 20:57:26.140210    8620 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0706 20:57:26.140277    8620 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0706 20:57:26.140277    8620 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0706 20:57:26.140277    8620 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0706 20:57:26.140277    8620 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0706 20:57:26.140277    8620 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2860385s)
	I0706 20:57:26.140277    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0706 20:57:26.358753    8620 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0706 20:57:26.358809    8620 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0706 20:57:26.358809    8620 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0706 20:57:26.358898    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0706 20:57:26.449246    8620 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0706 20:57:26.449320    8620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0706 20:57:26.449320    8620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0706 20:57:26.449320    8620 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0706 20:57:26.449374    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0706 20:57:26.531063    8620 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0706 20:57:26.531063    8620 api_server.go:52] waiting for apiserver process to appear ...
	I0706 20:57:26.541615    8620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 20:57:27.088650    8620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 20:57:27.572417    8620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 20:57:28.076950    8620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 20:57:28.582209    8620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 20:57:29.074881    8620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 20:57:29.581069    8620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 20:57:30.079570    8620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 20:57:30.097192    8620 command_runner.go:130] > 1884
	I0706 20:57:30.098196    8620 api_server.go:72] duration metric: took 3.5671071s to wait for apiserver process to appear ...
	I0706 20:57:30.098196    8620 api_server.go:88] waiting for apiserver healthz status ...
	I0706 20:57:30.098196    8620 api_server.go:253] Checking apiserver healthz at https://172.29.78.0:8443/healthz ...
	I0706 20:57:33.313771    8620 api_server.go:279] https://172.29.78.0:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0706 20:57:33.314293    8620 api_server.go:103] status: https://172.29.78.0:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0706 20:57:33.825649    8620 api_server.go:253] Checking apiserver healthz at https://172.29.78.0:8443/healthz ...
	I0706 20:57:33.838379    8620 api_server.go:279] https://172.29.78.0:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0706 20:57:33.838379    8620 api_server.go:103] status: https://172.29.78.0:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0706 20:57:34.329577    8620 api_server.go:253] Checking apiserver healthz at https://172.29.78.0:8443/healthz ...
	I0706 20:57:34.343838    8620 api_server.go:279] https://172.29.78.0:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0706 20:57:34.344523    8620 api_server.go:103] status: https://172.29.78.0:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0706 20:57:34.823640    8620 api_server.go:253] Checking apiserver healthz at https://172.29.78.0:8443/healthz ...
	I0706 20:57:34.854715    8620 api_server.go:279] https://172.29.78.0:8443/healthz returned 200:
	ok
	I0706 20:57:34.854794    8620 round_trippers.go:463] GET https://172.29.78.0:8443/version
	I0706 20:57:34.854794    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:34.854794    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:34.854794    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:34.870433    8620 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0706 20:57:34.870433    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:34.870433    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:34.870433    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:34.870433    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:34.870433    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:34.870433    8620 round_trippers.go:580]     Content-Length: 263
	I0706 20:57:34.870433    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:34 GMT
	I0706 20:57:34.870433    8620 round_trippers.go:580]     Audit-Id: 9739c2ad-e0e3-4e04-a38b-c2557626e991
	I0706 20:57:34.870433    8620 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0706 20:57:34.870433    8620 api_server.go:141] control plane version: v1.27.3
	I0706 20:57:34.870433    8620 api_server.go:131] duration metric: took 4.7722024s to wait for apiserver health ...
	I0706 20:57:34.871021    8620 cni.go:84] Creating CNI manager for ""
	I0706 20:57:34.871021    8620 cni.go:137] 3 nodes found, recommending kindnet
	I0706 20:57:34.873879    8620 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0706 20:57:34.886994    8620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0706 20:57:34.900761    8620 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0706 20:57:34.900813    8620 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0706 20:57:34.900889    8620 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0706 20:57:34.900926    8620 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0706 20:57:34.900926    8620 command_runner.go:130] > Access: 2023-07-06 20:56:55.006854400 +0000
	I0706 20:57:34.901000    8620 command_runner.go:130] > Modify: 2023-06-30 22:28:30.000000000 +0000
	I0706 20:57:34.901029    8620 command_runner.go:130] > Change: 2023-07-06 20:56:46.220000000 +0000
	I0706 20:57:34.901029    8620 command_runner.go:130] >  Birth: -
	I0706 20:57:34.901029    8620 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0706 20:57:34.901029    8620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0706 20:57:34.957859    8620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0706 20:57:36.790686    8620 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0706 20:57:36.790783    8620 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0706 20:57:36.790783    8620 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0706 20:57:36.790817    8620 command_runner.go:130] > daemonset.apps/kindnet configured
	I0706 20:57:36.790817    8620 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.8329454s)
	I0706 20:57:36.790935    8620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0706 20:57:36.790935    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods
	I0706 20:57:36.790935    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:36.790935    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:36.790935    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:36.796768    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:57:36.796768    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:36.796768    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:36.796768    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:36.796768    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:36 GMT
	I0706 20:57:36.796768    8620 round_trippers.go:580]     Audit-Id: 6affc1a8-fee0-458c-b450-cd571885ab3c
	I0706 20:57:36.796768    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:36.796768    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:36.798814    8620 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1190"},"items":[{"metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83864 chars]
	I0706 20:57:36.805497    8620 system_pods.go:59] 12 kube-system pods found
	I0706 20:57:36.805497    8620 system_pods.go:61] "coredns-5d78c9869d-m7j99" [dfa019d5-9528-4f25-8aab-03d1d276bb0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0706 20:57:36.805497    8620 system_pods.go:61] "etcd-multinode-144300" [3cf71374-8b9f-4bee-a5a7-538dcf09ed5e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0706 20:57:36.805497    8620 system_pods.go:61] "kindnet-9pjnm" [85523421-1320-4587-ba8c-cbb357ee7eb1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0706 20:57:36.805497    8620 system_pods.go:61] "kindnet-jhjpn" [873ba5ea-0975-4046-ac70-7f652703f7c6] Running
	I0706 20:57:36.805497    8620 system_pods.go:61] "kindnet-z6sjf" [c2828b0f-72bb-4203-ab44-280e4de85926] Running
	I0706 20:57:36.805497    8620 system_pods.go:61] "kube-apiserver-multinode-144300" [c3e05753-1404-4779-b0dd-d7bf63b44bdd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0706 20:57:36.805497    8620 system_pods.go:61] "kube-controller-manager-multinode-144300" [d9a60269-68e9-4ea2-82fe-63cedee225ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0706 20:57:36.805497    8620 system_pods.go:61] "kube-proxy-f5vmt" [e615de7b-b4a0-4060-aecd-0581b032227d] Running
	I0706 20:57:36.805497    8620 system_pods.go:61] "kube-proxy-h6h62" [6949ff1e-f5c0-4ab2-ae7f-6b30775e220d] Running
	I0706 20:57:36.806101    8620 system_pods.go:61] "kube-proxy-x7bwf" [3326b20f-277b-435c-8b7e-7d305167affb] Running
	I0706 20:57:36.806101    8620 system_pods.go:61] "kube-scheduler-multinode-144300" [70e904dd-fca0-436e-84d9-101fbc1cd9b0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0706 20:57:36.806144    8620 system_pods.go:61] "storage-provisioner" [75b208e7-5f24-4849-867c-c7fa45213999] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0706 20:57:36.806165    8620 system_pods.go:74] duration metric: took 15.2088ms to wait for pod list to return data ...
	I0706 20:57:36.806165    8620 node_conditions.go:102] verifying NodePressure condition ...
	I0706 20:57:36.806223    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes
	I0706 20:57:36.806223    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:36.806223    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:36.806223    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:36.811141    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:36.811141    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:36.811141    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:36 GMT
	I0706 20:57:36.811141    8620 round_trippers.go:580]     Audit-Id: b734a974-569b-45bf-be6a-b86bba13f6d4
	I0706 20:57:36.811141    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:36.811141    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:36.811141    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:36.811141    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:36.811845    8620 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1190"},"items":[{"metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 13750 chars]
	I0706 20:57:36.813418    8620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 20:57:36.813463    8620 node_conditions.go:123] node cpu capacity is 2
	I0706 20:57:36.813510    8620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 20:57:36.813510    8620 node_conditions.go:123] node cpu capacity is 2
	I0706 20:57:36.813510    8620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 20:57:36.813510    8620 node_conditions.go:123] node cpu capacity is 2
	I0706 20:57:36.813510    8620 node_conditions.go:105] duration metric: took 7.3448ms to run NodePressure ...
	I0706 20:57:36.813575    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0706 20:57:37.154933    8620 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0706 20:57:37.154933    8620 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0706 20:57:37.154933    8620 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0706 20:57:37.154933    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0706 20:57:37.154933    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:37.154933    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:37.154933    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:37.161710    8620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0706 20:57:37.161710    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:37.161710    8620 round_trippers.go:580]     Audit-Id: dd93d5b9-8f51-4ec3-a4b6-6b8497a1f428
	I0706 20:57:37.161710    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:37.161710    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:37.161710    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:37.161710    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:37.161710    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:37 GMT
	I0706 20:57:37.162845    8620 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1192"},"items":[{"metadata":{"name":"etcd-multinode-144300","namespace":"kube-system","uid":"3cf71374-8b9f-4bee-a5a7-538dcf09ed5e","resourceVersion":"1166","creationTimestamp":"2023-07-06T20:57:34Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.78.0:2379","kubernetes.io/config.hash":"fcb31617214a528ab159c24a1103b7af","kubernetes.io/config.mirror":"fcb31617214a528ab159c24a1103b7af","kubernetes.io/config.seen":"2023-07-06T20:57:27.010845433Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 29296 chars]
	I0706 20:57:37.164989    8620 kubeadm.go:787] kubelet initialised
	I0706 20:57:37.165036    8620 kubeadm.go:788] duration metric: took 10.1029ms waiting for restarted kubelet to initialise ...
	I0706 20:57:37.165080    8620 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 20:57:37.165105    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods
	I0706 20:57:37.165105    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:37.165105    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:37.165105    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:37.169938    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:37.169938    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:37.169938    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:37.169938    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:37 GMT
	I0706 20:57:37.169938    8620 round_trippers.go:580]     Audit-Id: bb7a1520-b763-49f8-8dbc-fd061a465b37
	I0706 20:57:37.169938    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:37.169938    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:37.169938    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:37.172049    8620 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1192"},"items":[{"metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83864 chars]
	I0706 20:57:37.175471    8620 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:37.175617    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:37.175617    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:37.175617    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:37.175617    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:37.179262    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:37.179262    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:37.179670    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:37.179670    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:37.179670    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:37.179670    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:37 GMT
	I0706 20:57:37.179670    8620 round_trippers.go:580]     Audit-Id: 4ef0ea6f-89b8-4f8f-a76a-5d214456637c
	I0706 20:57:37.179670    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:37.179670    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:37.180337    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:37.180337    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:37.180337    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:37.180337    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:37.188790    8620 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0706 20:57:37.188790    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:37.188790    8620 round_trippers.go:580]     Audit-Id: 356de17a-1071-4a85-86c4-9c81d98464b5
	I0706 20:57:37.188790    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:37.188790    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:37.188790    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:37.188790    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:37.188790    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:37 GMT
	I0706 20:57:37.188790    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5362 chars]
	I0706 20:57:37.190288    8620 pod_ready.go:97] node "multinode-144300" hosting pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-144300" has status "Ready":"False"
	I0706 20:57:37.190288    8620 pod_ready.go:81] duration metric: took 14.67ms waiting for pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace to be "Ready" ...
	E0706 20:57:37.190288    8620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-144300" hosting pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-144300" has status "Ready":"False"
	I0706 20:57:37.190346    8620 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:37.190434    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-144300
	I0706 20:57:37.190467    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:37.190467    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:37.190493    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:37.194853    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:37.194853    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:37.194853    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:37.194853    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:37 GMT
	I0706 20:57:37.194853    8620 round_trippers.go:580]     Audit-Id: 08617293-90e5-4ab4-b72f-4a29953ef549
	I0706 20:57:37.194853    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:37.194853    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:37.194853    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:37.194853    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-144300","namespace":"kube-system","uid":"3cf71374-8b9f-4bee-a5a7-538dcf09ed5e","resourceVersion":"1166","creationTimestamp":"2023-07-06T20:57:34Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.78.0:2379","kubernetes.io/config.hash":"fcb31617214a528ab159c24a1103b7af","kubernetes.io/config.mirror":"fcb31617214a528ab159c24a1103b7af","kubernetes.io/config.seen":"2023-07-06T20:57:27.010845433Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6067 chars]
	I0706 20:57:37.196082    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:37.196139    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:37.196139    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:37.196139    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:37.199391    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:37.199391    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:37.199391    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:37.199391    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:37.199391    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:37.199391    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:37 GMT
	I0706 20:57:37.199391    8620 round_trippers.go:580]     Audit-Id: 7d27d99f-f7c1-4d98-b26f-22284319f542
	I0706 20:57:37.199391    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:37.199391    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5362 chars]
	I0706 20:57:37.200683    8620 pod_ready.go:97] node "multinode-144300" hosting pod "etcd-multinode-144300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-144300" has status "Ready":"False"
	I0706 20:57:37.200719    8620 pod_ready.go:81] duration metric: took 10.3735ms waiting for pod "etcd-multinode-144300" in "kube-system" namespace to be "Ready" ...
	E0706 20:57:37.200785    8620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-144300" hosting pod "etcd-multinode-144300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-144300" has status "Ready":"False"
	I0706 20:57:37.200785    8620 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:37.200850    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-144300
	I0706 20:57:37.200850    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:37.200850    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:37.200850    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:37.204593    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:37.204593    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:37.204593    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:37.204593    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:37 GMT
	I0706 20:57:37.204593    8620 round_trippers.go:580]     Audit-Id: 090694c8-f0db-472f-978f-726e61c7906e
	I0706 20:57:37.204593    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:37.204593    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:37.204593    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:37.204593    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-144300","namespace":"kube-system","uid":"c3e05753-1404-4779-b0dd-d7bf63b44bdd","resourceVersion":"1163","creationTimestamp":"2023-07-06T20:57:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.78.0:8443","kubernetes.io/config.hash":"39dd3fec037ddc9365ef4418fd161ea0","kubernetes.io/config.mirror":"39dd3fec037ddc9365ef4418fd161ea0","kubernetes.io/config.seen":"2023-07-06T20:57:27.010850733Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7626 chars]
	I0706 20:57:37.205592    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:37.205621    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:37.205621    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:37.205621    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:37.209922    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:37.209922    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:37.209922    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:37 GMT
	I0706 20:57:37.209922    8620 round_trippers.go:580]     Audit-Id: c0ef9afb-07af-4606-b666-9665a20b4fdb
	I0706 20:57:37.209922    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:37.209922    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:37.209922    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:37.209922    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:37.210964    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5362 chars]
	I0706 20:57:37.211514    8620 pod_ready.go:97] node "multinode-144300" hosting pod "kube-apiserver-multinode-144300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-144300" has status "Ready":"False"
	I0706 20:57:37.211514    8620 pod_ready.go:81] duration metric: took 10.7291ms waiting for pod "kube-apiserver-multinode-144300" in "kube-system" namespace to be "Ready" ...
	E0706 20:57:37.211514    8620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-144300" hosting pod "kube-apiserver-multinode-144300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-144300" has status "Ready":"False"
	I0706 20:57:37.211514    8620 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:37.211587    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-144300
	I0706 20:57:37.211587    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:37.211697    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:37.211697    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:37.213956    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:37.213956    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:37.213956    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:37.213956    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:37.213956    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:37.214975    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:37.214975    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:37 GMT
	I0706 20:57:37.215008    8620 round_trippers.go:580]     Audit-Id: 8d82e1b3-8575-4c13-85d9-124331a29684
	I0706 20:57:37.215284    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-144300","namespace":"kube-system","uid":"d9a60269-68e9-4ea2-82fe-63cedee225ef","resourceVersion":"1111","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e49c2c2437113c0995d945e240b5b14","kubernetes.io/config.mirror":"8e49c2c2437113c0995d945e240b5b14","kubernetes.io/config.seen":"2023-07-06T20:46:36.035686687Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7429 chars]
	I0706 20:57:37.215840    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:37.215868    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:37.215868    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:37.215868    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:37.219221    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:37.219221    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:37.219221    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:37.219221    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:37.219221    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:37 GMT
	I0706 20:57:37.219221    8620 round_trippers.go:580]     Audit-Id: 4fa1b297-41b3-491a-94df-9c8120e0fcd9
	I0706 20:57:37.219221    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:37.219221    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:37.220786    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5362 chars]
	I0706 20:57:37.221257    8620 pod_ready.go:97] node "multinode-144300" hosting pod "kube-controller-manager-multinode-144300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-144300" has status "Ready":"False"
	I0706 20:57:37.221356    8620 pod_ready.go:81] duration metric: took 9.7725ms waiting for pod "kube-controller-manager-multinode-144300" in "kube-system" namespace to be "Ready" ...
	E0706 20:57:37.221356    8620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-144300" hosting pod "kube-controller-manager-multinode-144300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-144300" has status "Ready":"False"
	I0706 20:57:37.221356    8620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f5vmt" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:37.399180    8620 request.go:628] Waited for 177.7502ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f5vmt
	I0706 20:57:37.399426    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f5vmt
	I0706 20:57:37.399426    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:37.399426    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:37.399426    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:37.403339    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:37.403976    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:37.403976    8620 round_trippers.go:580]     Audit-Id: 64eb816c-be1e-44a4-93b7-25f156cb2a9e
	I0706 20:57:37.403976    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:37.403976    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:37.403976    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:37.403976    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:37.404045    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:37 GMT
	I0706 20:57:37.404445    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-f5vmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"e615de7b-b4a0-4060-aecd-0581b032227d","resourceVersion":"567","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0706 20:57:37.601698    8620 request.go:628] Waited for 196.3699ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:57:37.602179    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:57:37.602179    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:37.602179    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:37.602179    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:37.606096    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:37.606096    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:37.606096    8620 round_trippers.go:580]     Audit-Id: 0a83293c-79f9-440e-bd52-505bf4e08be0
	I0706 20:57:37.606096    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:37.606096    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:37.606096    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:37.606096    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:37.606096    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:37 GMT
	I0706 20:57:37.606096    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"963","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3266 chars]
	I0706 20:57:37.606804    8620 pod_ready.go:92] pod "kube-proxy-f5vmt" in "kube-system" namespace has status "Ready":"True"
	I0706 20:57:37.606804    8620 pod_ready.go:81] duration metric: took 385.4141ms waiting for pod "kube-proxy-f5vmt" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:37.606804    8620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-h6h62" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:37.803141    8620 request.go:628] Waited for 195.968ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6h62
	I0706 20:57:37.803141    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6h62
	I0706 20:57:37.803141    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:37.803406    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:37.803406    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:37.806892    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:37.806892    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:37.806892    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:37 GMT
	I0706 20:57:37.806892    8620 round_trippers.go:580]     Audit-Id: 3b754031-612e-4060-b7e5-94e7568d6fd2
	I0706 20:57:37.807141    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:37.807141    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:37.807141    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:37.807141    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:37.807373    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h6h62","generateName":"kube-proxy-","namespace":"kube-system","uid":"6949ff1e-f5c0-4ab2-ae7f-6b30775e220d","resourceVersion":"1170","creationTimestamp":"2023-07-06T20:46:49Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0706 20:57:38.005695    8620 request.go:628] Waited for 197.4696ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:38.005869    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:38.005869    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:38.005869    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:38.005869    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:38.009942    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:38.009942    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:38.009942    8620 round_trippers.go:580]     Audit-Id: d03e0100-99d5-4c4e-8156-9ba693eac91f
	I0706 20:57:38.010416    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:38.010416    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:38.010416    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:38.010416    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:38.010416    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:38 GMT
	I0706 20:57:38.010694    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5362 chars]
	I0706 20:57:38.011242    8620 pod_ready.go:97] node "multinode-144300" hosting pod "kube-proxy-h6h62" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-144300" has status "Ready":"False"
	I0706 20:57:38.011242    8620 pod_ready.go:81] duration metric: took 404.3718ms waiting for pod "kube-proxy-h6h62" in "kube-system" namespace to be "Ready" ...
	E0706 20:57:38.011242    8620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-144300" hosting pod "kube-proxy-h6h62" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-144300" has status "Ready":"False"
	I0706 20:57:38.011242    8620 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-x7bwf" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:38.193334    8620 request.go:628] Waited for 181.7168ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x7bwf
	I0706 20:57:38.193419    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x7bwf
	I0706 20:57:38.193419    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:38.193419    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:38.193576    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:38.197944    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:38.198021    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:38.198021    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:38.198021    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:38.198021    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:38 GMT
	I0706 20:57:38.198021    8620 round_trippers.go:580]     Audit-Id: 9488a926-2f38-40dc-a03d-bcf0219ee661
	I0706 20:57:38.198021    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:38.198021    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:38.198312    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x7bwf","generateName":"kube-proxy-","namespace":"kube-system","uid":"3326b20f-277b-435c-8b7e-7d305167affb","resourceVersion":"1074","creationTimestamp":"2023-07-06T20:50:55Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:50:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0706 20:57:38.400059    8620 request.go:628] Waited for 200.8385ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 20:57:38.400233    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 20:57:38.400324    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:38.400353    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:38.400380    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:38.403940    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:38.403940    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:38.404283    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:38 GMT
	I0706 20:57:38.404283    8620 round_trippers.go:580]     Audit-Id: a31fe4b2-cd35-4834-95cd-4809211c7ee4
	I0706 20:57:38.404283    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:38.404283    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:38.404283    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:38.404283    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:38.404593    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"9147f70e-3f8f-4f6c-98f8-6e9530ca9678","resourceVersion":"1089","creationTimestamp":"2023-07-06T20:55:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:55:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 3084 chars]
	I0706 20:57:38.405153    8620 pod_ready.go:92] pod "kube-proxy-x7bwf" in "kube-system" namespace has status "Ready":"True"
	I0706 20:57:38.405218    8620 pod_ready.go:81] duration metric: took 393.9731ms waiting for pod "kube-proxy-x7bwf" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:38.405218    8620 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:38.602125    8620 request.go:628] Waited for 196.6458ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-144300
	I0706 20:57:38.602252    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-144300
	I0706 20:57:38.602252    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:38.602252    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:38.602252    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:38.605626    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:38.606640    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:38.606640    8620 round_trippers.go:580]     Audit-Id: 67924ba5-e8cf-46bd-bfd5-943006ca6120
	I0706 20:57:38.606640    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:38.606640    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:38.606640    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:38.606640    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:38.606640    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:38 GMT
	I0706 20:57:38.606640    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-144300","namespace":"kube-system","uid":"70e904dd-fca0-436e-84d9-101fbc1cd9b0","resourceVersion":"1112","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee7970f809cc50237f5ebbefc1799bf2","kubernetes.io/config.mirror":"ee7970f809cc50237f5ebbefc1799bf2","kubernetes.io/config.seen":"2023-07-06T20:46:36.035687887Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5141 chars]
	I0706 20:57:38.805518    8620 request.go:628] Waited for 197.2743ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:38.805813    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:38.805872    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:38.805872    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:38.805872    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:38.809301    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:38.809356    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:38.809356    8620 round_trippers.go:580]     Audit-Id: dc332a0d-00a2-4d4d-98eb-c03e8e7f8152
	I0706 20:57:38.809356    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:38.809356    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:38.809356    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:38.809356    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:38.809356    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:38 GMT
	I0706 20:57:38.809495    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5362 chars]
	I0706 20:57:38.810102    8620 pod_ready.go:97] node "multinode-144300" hosting pod "kube-scheduler-multinode-144300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-144300" has status "Ready":"False"
	I0706 20:57:38.810102    8620 pod_ready.go:81] duration metric: took 404.8343ms waiting for pod "kube-scheduler-multinode-144300" in "kube-system" namespace to be "Ready" ...
	E0706 20:57:38.810102    8620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-144300" hosting pod "kube-scheduler-multinode-144300" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-144300" has status "Ready":"False"
	I0706 20:57:38.810102    8620 pod_ready.go:38] duration metric: took 1.6450098s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 20:57:38.810102    8620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0706 20:57:38.835507    8620 command_runner.go:130] > -16
	I0706 20:57:38.835594    8620 ops.go:34] apiserver oom_adj: -16
	I0706 20:57:38.835594    8620 kubeadm.go:640] restartCluster took 14.5690728s
	I0706 20:57:38.835594    8620 kubeadm.go:406] StartCluster complete in 14.6256033s
	I0706 20:57:38.835718    8620 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:57:38.835900    8620 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:57:38.837336    8620 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:57:38.838941    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0706 20:57:38.839000    8620 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0706 20:57:38.845435    8620 out.go:177] * Enabled addons: 
	I0706 20:57:38.839402    8620 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:57:38.860458    8620 addons.go:499] enable addons completed in 21.4581ms: enabled=[]
	I0706 20:57:38.850412    8620 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:57:38.861418    8620 kapi.go:59] client config for multinode-144300: &rest.Config{Host:"https://172.29.78.0:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 20:57:38.862420    8620 cert_rotation.go:137] Starting client certificate rotation controller
	I0706 20:57:38.863494    8620 round_trippers.go:463] GET https://172.29.78.0:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0706 20:57:38.863494    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:38.863494    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:38.863494    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:38.879274    8620 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0706 20:57:38.880050    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:38.880050    8620 round_trippers.go:580]     Audit-Id: 8e4844fb-19e7-4a85-855f-7b0af823f6b2
	I0706 20:57:38.880050    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:38.880050    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:38.880050    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:38.880136    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:38.880136    8620 round_trippers.go:580]     Content-Length: 292
	I0706 20:57:38.880136    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:38 GMT
	I0706 20:57:38.880291    8620 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b60ac185-d6d8-49e6-a9b3-f1d59eb7807f","resourceVersion":"1191","creationTimestamp":"2023-07-06T20:46:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0706 20:57:38.880422    8620 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-144300" context rescaled to 1 replicas
	I0706 20:57:38.880422    8620 start.go:223] Will wait 6m0s for node &{Name: IP:172.29.78.0 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 20:57:38.886852    8620 out.go:177] * Verifying Kubernetes components...
	I0706 20:57:38.901071    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 20:57:39.018720    8620 command_runner.go:130] > apiVersion: v1
	I0706 20:57:39.018770    8620 command_runner.go:130] > data:
	I0706 20:57:39.018770    8620 command_runner.go:130] >   Corefile: |
	I0706 20:57:39.018770    8620 command_runner.go:130] >     .:53 {
	I0706 20:57:39.018770    8620 command_runner.go:130] >         log
	I0706 20:57:39.018837    8620 command_runner.go:130] >         errors
	I0706 20:57:39.018837    8620 command_runner.go:130] >         health {
	I0706 20:57:39.018837    8620 command_runner.go:130] >            lameduck 5s
	I0706 20:57:39.018837    8620 command_runner.go:130] >         }
	I0706 20:57:39.018837    8620 command_runner.go:130] >         ready
	I0706 20:57:39.018886    8620 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0706 20:57:39.018886    8620 command_runner.go:130] >            pods insecure
	I0706 20:57:39.018886    8620 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0706 20:57:39.018886    8620 command_runner.go:130] >            ttl 30
	I0706 20:57:39.018886    8620 command_runner.go:130] >         }
	I0706 20:57:39.018886    8620 command_runner.go:130] >         prometheus :9153
	I0706 20:57:39.018886    8620 command_runner.go:130] >         hosts {
	I0706 20:57:39.018938    8620 command_runner.go:130] >            172.29.64.1 host.minikube.internal
	I0706 20:57:39.018938    8620 command_runner.go:130] >            fallthrough
	I0706 20:57:39.018938    8620 command_runner.go:130] >         }
	I0706 20:57:39.018938    8620 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0706 20:57:39.018938    8620 command_runner.go:130] >            max_concurrent 1000
	I0706 20:57:39.018938    8620 command_runner.go:130] >         }
	I0706 20:57:39.018938    8620 command_runner.go:130] >         cache 30
	I0706 20:57:39.019018    8620 command_runner.go:130] >         loop
	I0706 20:57:39.019018    8620 command_runner.go:130] >         reload
	I0706 20:57:39.019018    8620 command_runner.go:130] >         loadbalance
	I0706 20:57:39.019018    8620 command_runner.go:130] >     }
	I0706 20:57:39.019018    8620 command_runner.go:130] > kind: ConfigMap
	I0706 20:57:39.019018    8620 command_runner.go:130] > metadata:
	I0706 20:57:39.019018    8620 command_runner.go:130] >   creationTimestamp: "2023-07-06T20:46:35Z"
	I0706 20:57:39.019018    8620 command_runner.go:130] >   name: coredns
	I0706 20:57:39.019018    8620 command_runner.go:130] >   namespace: kube-system
	I0706 20:57:39.019018    8620 command_runner.go:130] >   resourceVersion: "400"
	I0706 20:57:39.019018    8620 command_runner.go:130] >   uid: d3d70a42-f7a8-414d-bd9e-06da3ba34172
	I0706 20:57:39.019018    8620 node_ready.go:35] waiting up to 6m0s for node "multinode-144300" to be "Ready" ...
	I0706 20:57:39.019018    8620 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0706 20:57:39.019018    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:39.019018    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:39.019018    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:39.019018    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:39.026830    8620 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0706 20:57:39.026830    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:39.026830    8620 round_trippers.go:580]     Audit-Id: e14caadd-69a1-4d7f-93ed-8cc7a59a4ddc
	I0706 20:57:39.026830    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:39.026830    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:39.026830    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:39.026830    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:39.026830    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:39 GMT
	I0706 20:57:39.026830    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5362 chars]
	I0706 20:57:39.528711    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:39.528777    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:39.528777    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:39.528844    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:39.536654    8620 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0706 20:57:39.536654    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:39.536654    8620 round_trippers.go:580]     Audit-Id: 6c9ebacc-1444-4869-9783-522d9735db47
	I0706 20:57:39.536654    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:39.536654    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:39.536654    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:39.536654    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:39.536654    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:39 GMT
	I0706 20:57:39.536654    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5362 chars]
	I0706 20:57:40.029735    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:40.029735    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:40.029735    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:40.029735    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:40.033311    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:40.033311    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:40.033311    8620 round_trippers.go:580]     Audit-Id: fc37b9e5-2154-4916-b2ab-5ee025a11d09
	I0706 20:57:40.033311    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:40.033311    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:40.033311    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:40.033775    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:40.033775    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:40 GMT
	I0706 20:57:40.034619    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5362 chars]
	I0706 20:57:40.528975    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:40.528975    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:40.528975    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:40.528975    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:40.534434    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:57:40.534434    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:40.534434    8620 round_trippers.go:580]     Audit-Id: 63d19bd6-55a4-4b49-a7ba-de28f44be89d
	I0706 20:57:40.534434    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:40.534687    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:40.534687    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:40.534687    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:40.534687    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:40 GMT
	I0706 20:57:40.535296    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5362 chars]
	I0706 20:57:41.028717    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:41.028825    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:41.028825    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:41.028825    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:41.036462    8620 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0706 20:57:41.036462    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:41.036516    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:41.036516    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:41.036516    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:41.036516    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:41 GMT
	I0706 20:57:41.036516    8620 round_trippers.go:580]     Audit-Id: 521c216b-1fa2-4fbc-8300-d7cc8eadb410
	I0706 20:57:41.036565    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:41.036705    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5362 chars]
	I0706 20:57:41.037655    8620 node_ready.go:58] node "multinode-144300" has status "Ready":"False"
	I0706 20:57:41.528935    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:41.528935    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:41.528935    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:41.528935    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:41.531351    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:41.531351    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:41.531351    8620 round_trippers.go:580]     Audit-Id: bbab0e3e-9007-42a8-bc2a-da12cf80b48f
	I0706 20:57:41.532107    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:41.532107    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:41.532107    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:41.532107    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:41.532107    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:41 GMT
	I0706 20:57:41.532440    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5362 chars]
	I0706 20:57:42.028768    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:42.028853    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:42.028853    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:42.028853    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:42.032942    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:42.033500    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:42.033500    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:42.033500    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:42.033559    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:42.033559    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:42.033559    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:42 GMT
	I0706 20:57:42.033559    8620 round_trippers.go:580]     Audit-Id: 075801a7-3f76-451e-b266-16ec66f2f56e
	I0706 20:57:42.033903    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5362 chars]
	I0706 20:57:42.544253    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:42.544253    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:42.544333    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:42.544333    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:42.547000    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:42.547000    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:42.547000    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:42.547000    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:42.547000    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:42.547000    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:42 GMT
	I0706 20:57:42.547000    8620 round_trippers.go:580]     Audit-Id: dc71303b-3f30-4efd-bb89-7afb5c616809
	I0706 20:57:42.547000    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:42.548997    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1110","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5362 chars]
	I0706 20:57:43.028630    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:43.028630    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:43.028630    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:43.028630    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:43.033271    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:43.033271    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:43.033271    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:43.033271    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:43 GMT
	I0706 20:57:43.033271    8620 round_trippers.go:580]     Audit-Id: 27c557d7-851e-4234-93d4-50345f447fdf
	I0706 20:57:43.033271    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:43.033271    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:43.033271    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:43.034041    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:43.034627    8620 node_ready.go:49] node "multinode-144300" has status "Ready":"True"
	I0706 20:57:43.034627    8620 node_ready.go:38] duration metric: took 4.0155799s waiting for node "multinode-144300" to be "Ready" ...
	I0706 20:57:43.034627    8620 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 20:57:43.034767    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods
	I0706 20:57:43.034904    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:43.034904    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:43.034904    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:43.039165    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:43.039165    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:43.039165    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:43 GMT
	I0706 20:57:43.039165    8620 round_trippers.go:580]     Audit-Id: d6513d14-94da-49de-83f1-3420e9571fe6
	I0706 20:57:43.039165    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:43.039165    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:43.039927    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:43.039927    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:43.041116    8620 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1209"},"items":[{"metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83220 chars]
	I0706 20:57:43.046139    8620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:43.046932    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:43.046932    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:43.046932    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:43.046932    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:43.051963    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:57:43.051963    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:43.051963    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:43.051963    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:43 GMT
	I0706 20:57:43.052373    8620 round_trippers.go:580]     Audit-Id: 43970f1b-8f1f-4662-aa57-527a026d2e27
	I0706 20:57:43.052373    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:43.052373    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:43.052373    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:43.052610    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:43.053341    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:43.053341    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:43.053472    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:43.053472    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:43.059683    8620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0706 20:57:43.059683    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:43.059683    8620 round_trippers.go:580]     Audit-Id: 61089bd3-cc66-4820-bb65-35d366e3bf79
	I0706 20:57:43.059683    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:43.059683    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:43.059683    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:43.059683    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:43.059683    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:43 GMT
	I0706 20:57:43.060229    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:43.575439    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:43.575439    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:43.575439    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:43.575439    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:43.580046    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:43.580046    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:43.580046    8620 round_trippers.go:580]     Audit-Id: 6c9ceafa-185c-4b57-a7b1-7818f1d14336
	I0706 20:57:43.580542    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:43.580542    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:43.580542    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:43.580542    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:43.580542    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:43 GMT
	I0706 20:57:43.580658    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:43.581471    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:43.581577    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:43.581577    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:43.581577    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:43.586759    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:57:43.586759    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:43.586759    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:43.586759    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:43.586871    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:43.586871    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:43.586871    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:43 GMT
	I0706 20:57:43.586924    8620 round_trippers.go:580]     Audit-Id: ef70a668-5c47-4cde-8d07-d266111b6300
	I0706 20:57:43.587251    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:44.073085    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:44.073085    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:44.073085    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:44.073301    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:44.077388    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:44.077388    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:44.077471    8620 round_trippers.go:580]     Audit-Id: 7574d0bb-6e6e-456d-8c68-bf1913e4a92a
	I0706 20:57:44.077471    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:44.077471    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:44.077471    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:44.077471    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:44.077471    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:44 GMT
	I0706 20:57:44.077471    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:44.078284    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:44.078284    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:44.078284    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:44.078284    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:44.080874    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:44.081384    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:44.081384    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:44.081384    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:44 GMT
	I0706 20:57:44.081447    8620 round_trippers.go:580]     Audit-Id: 885ed116-341c-4e92-b541-74da74367769
	I0706 20:57:44.081447    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:44.081447    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:44.081447    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:44.081447    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:44.570679    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:44.570679    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:44.570679    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:44.570679    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:44.575321    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:44.576319    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:44.576319    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:44.576319    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:44 GMT
	I0706 20:57:44.576319    8620 round_trippers.go:580]     Audit-Id: af9998d4-c595-4ae6-81f1-852cfc67535a
	I0706 20:57:44.576319    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:44.576319    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:44.576319    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:44.576573    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:44.577271    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:44.577271    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:44.577271    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:44.577271    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:44.580624    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:44.580624    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:44.580624    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:44 GMT
	I0706 20:57:44.580624    8620 round_trippers.go:580]     Audit-Id: 0fa1b0c7-3d70-4440-8109-eecd41682998
	I0706 20:57:44.580624    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:44.580624    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:44.580884    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:44.580884    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:44.581441    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:45.071037    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:45.071037    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:45.071037    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:45.071037    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:45.075627    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:45.075627    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:45.075627    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:45.075819    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:45.075819    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:45.075819    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:45.075819    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:45 GMT
	I0706 20:57:45.075819    8620 round_trippers.go:580]     Audit-Id: 312bfb9a-50bd-4868-90c0-ab29e68b416e
	I0706 20:57:45.076080    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:45.076806    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:45.076806    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:45.076898    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:45.076898    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:45.079076    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:45.079076    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:45.079076    8620 round_trippers.go:580]     Audit-Id: 49a60544-2562-410a-aa84-3886cf4b9243
	I0706 20:57:45.079076    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:45.079076    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:45.079076    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:45.079076    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:45.079076    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:45 GMT
	I0706 20:57:45.080433    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:45.080890    8620 pod_ready.go:102] pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace has status "Ready":"False"
	I0706 20:57:45.571790    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:45.571919    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:45.571919    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:45.571919    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:45.575299    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:45.575299    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:45.575299    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:45.575299    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:45.575299    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:45 GMT
	I0706 20:57:45.575299    8620 round_trippers.go:580]     Audit-Id: a35f360a-2956-474f-a1f3-52cd2fdb41bf
	I0706 20:57:45.575299    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:45.575299    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:45.576038    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:45.577282    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:45.577355    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:45.577355    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:45.577355    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:45.580554    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:45.580554    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:45.580554    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:45.580554    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:45.580554    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:45 GMT
	I0706 20:57:45.580769    8620 round_trippers.go:580]     Audit-Id: cd3ac369-3dc3-4008-b7ea-f84645ee180b
	I0706 20:57:45.580769    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:45.580812    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:45.581036    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:46.076783    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:46.076783    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:46.076881    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:46.076881    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:46.087582    8620 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0706 20:57:46.087582    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:46.087582    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:46.087582    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:46.088076    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:46.088076    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:46 GMT
	I0706 20:57:46.088076    8620 round_trippers.go:580]     Audit-Id: e26f9b74-38a2-4217-8bd5-09b3141aa570
	I0706 20:57:46.088076    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:46.088298    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:46.088864    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:46.088864    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:46.088864    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:46.088864    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:46.092460    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:46.092460    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:46.092460    8620 round_trippers.go:580]     Audit-Id: 1927945b-3f77-4c25-ab1e-84065af5bd27
	I0706 20:57:46.093119    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:46.093119    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:46.093119    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:46.093119    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:46.093119    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:46 GMT
	I0706 20:57:46.093387    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:46.566613    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:46.566613    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:46.566613    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:46.566613    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:46.570403    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:46.570801    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:46.570801    8620 round_trippers.go:580]     Audit-Id: 29f21953-74eb-4284-be85-8e3b85ea5686
	I0706 20:57:46.570801    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:46.570801    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:46.570801    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:46.570801    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:46.570801    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:46 GMT
	I0706 20:57:46.571009    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:46.572109    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:46.572109    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:46.572218    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:46.572218    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:46.575188    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:46.575188    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:46.575188    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:46.575188    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:46.575188    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:46 GMT
	I0706 20:57:46.575188    8620 round_trippers.go:580]     Audit-Id: 3676f21c-4d1d-4d37-a349-14ee769df3db
	I0706 20:57:46.575866    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:46.575866    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:46.576219    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:47.066454    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:47.066454    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:47.066454    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:47.066454    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:47.070102    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:47.070102    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:47.071141    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:47.071167    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:47 GMT
	I0706 20:57:47.071167    8620 round_trippers.go:580]     Audit-Id: 259cc863-8480-4895-bca8-c97e8ef37e7e
	I0706 20:57:47.071167    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:47.071167    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:47.071167    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:47.071237    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:47.072352    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:47.072352    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:47.072423    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:47.072423    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:47.075093    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:47.075093    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:47.075093    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:47.075921    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:47.075921    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:47 GMT
	I0706 20:57:47.075921    8620 round_trippers.go:580]     Audit-Id: 7ad0cc4f-c2f2-414e-befa-7745a54c0ef7
	I0706 20:57:47.075921    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:47.076002    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:47.076295    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:47.564768    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:47.564925    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:47.564925    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:47.564925    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:47.569284    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:47.569284    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:47.569284    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:47.569284    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:47 GMT
	I0706 20:57:47.569284    8620 round_trippers.go:580]     Audit-Id: d77eda0b-7f3a-47c7-84f5-aa337b9a9ace
	I0706 20:57:47.569284    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:47.569284    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:47.569574    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:47.569786    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:47.570456    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:47.570524    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:47.570524    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:47.570524    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:47.576278    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:57:47.576278    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:47.576278    8620 round_trippers.go:580]     Audit-Id: b0add2c3-982a-476e-b6ef-b5c35315d859
	I0706 20:57:47.576278    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:47.576278    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:47.576825    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:47.576825    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:47.576825    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:47 GMT
	I0706 20:57:47.577127    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:47.577901    8620 pod_ready.go:102] pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace has status "Ready":"False"
	I0706 20:57:48.067227    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:48.067227    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:48.067349    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:48.067349    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:48.070273    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:48.070273    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:48.070273    8620 round_trippers.go:580]     Audit-Id: df435ce7-41b0-4553-9ab0-1630f1430bbd
	I0706 20:57:48.070273    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:48.071157    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:48.071157    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:48.071157    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:48.071157    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:48 GMT
	I0706 20:57:48.071688    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:48.072254    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:48.072254    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:48.072254    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:48.072254    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:48.075712    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:48.075712    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:48.075712    8620 round_trippers.go:580]     Audit-Id: 30cb6eab-32a8-4044-ba68-f376175a7f0a
	I0706 20:57:48.075712    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:48.075712    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:48.075712    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:48.075849    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:48.075889    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:48 GMT
	I0706 20:57:48.075993    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:48.568578    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:48.568578    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:48.568578    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:48.568578    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:48.572158    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:48.572736    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:48.572736    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:48 GMT
	I0706 20:57:48.572736    8620 round_trippers.go:580]     Audit-Id: 63c1d722-93bb-4dac-9d5f-16c2371ed91b
	I0706 20:57:48.572736    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:48.572736    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:48.572820    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:48.572820    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:48.573017    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:48.573690    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:48.573690    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:48.573690    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:48.573690    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:48.576597    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:48.576597    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:48.576597    8620 round_trippers.go:580]     Audit-Id: 5aee20fe-af9b-4089-9309-a6a7e8a46074
	I0706 20:57:48.576597    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:48.577218    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:48.577218    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:48.577218    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:48.577218    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:48 GMT
	I0706 20:57:48.577844    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:49.068267    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:49.068267    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:49.068267    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:49.068267    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:49.073787    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:57:49.073989    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:49.073989    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:49 GMT
	I0706 20:57:49.073989    8620 round_trippers.go:580]     Audit-Id: 8299d0f7-54d7-4411-9b06-2c94985d733e
	I0706 20:57:49.073989    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:49.073989    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:49.073989    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:49.073989    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:49.074200    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:49.074919    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:49.074919    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:49.074919    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:49.074919    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:49.079187    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:49.079187    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:49.079187    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:49.079187    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:49.079187    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:49.079187    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:49.079733    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:49 GMT
	I0706 20:57:49.079733    8620 round_trippers.go:580]     Audit-Id: 388a6fb9-82bc-4bd6-aaea-0f4355605a78
	I0706 20:57:49.080050    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:49.564667    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:49.564728    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:49.564728    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:49.564728    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:49.570505    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:57:49.570505    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:49.570505    8620 round_trippers.go:580]     Audit-Id: f103a4cb-b8f6-4022-a5c6-0db7a689610b
	I0706 20:57:49.570505    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:49.570505    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:49.570505    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:49.570505    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:49.570505    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:49 GMT
	I0706 20:57:49.571762    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:49.572381    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:49.572998    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:49.572998    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:49.572998    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:49.576215    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:49.576215    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:49.576215    8620 round_trippers.go:580]     Audit-Id: 30ecf27c-ee08-413a-a40e-34ec5decd609
	I0706 20:57:49.576215    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:49.576215    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:49.576215    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:49.576215    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:49.576215    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:49 GMT
	I0706 20:57:49.576909    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:50.066310    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:50.066310    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:50.066310    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:50.066310    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:50.070060    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:50.070060    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:50.070060    8620 round_trippers.go:580]     Audit-Id: 6e7cd792-7847-462c-b2e4-3595058d0e24
	I0706 20:57:50.070060    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:50.070060    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:50.070060    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:50.070060    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:50.070060    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:50 GMT
	I0706 20:57:50.070897    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:50.071519    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:50.071519    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:50.071519    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:50.071519    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:50.074298    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:50.074298    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:50.074298    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:50.074298    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:50.075158    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:50 GMT
	I0706 20:57:50.075158    8620 round_trippers.go:580]     Audit-Id: b9352b8d-f22c-4b9a-b31f-0fe7de24c5b8
	I0706 20:57:50.075158    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:50.075158    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:50.075432    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:50.075786    8620 pod_ready.go:102] pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace has status "Ready":"False"
	I0706 20:57:50.575192    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:50.575192    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:50.575282    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:50.575282    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:50.578580    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:50.578580    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:50.579047    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:50.579047    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:50 GMT
	I0706 20:57:50.579047    8620 round_trippers.go:580]     Audit-Id: e2158fa9-1900-4285-b572-dc7626814026
	I0706 20:57:50.579047    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:50.579047    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:50.579047    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:50.579220    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:50.579793    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:50.579793    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:50.579793    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:50.579793    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:50.583095    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:50.583095    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:50.583095    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:50.583095    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:50.583095    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:50.583095    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:50 GMT
	I0706 20:57:50.583095    8620 round_trippers.go:580]     Audit-Id: d36a90e5-432b-4c4f-8f30-480e41cedae4
	I0706 20:57:50.583095    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:50.583095    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:51.065262    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:51.065321    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:51.065321    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:51.065429    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:51.068820    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:51.068820    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:51.068820    8620 round_trippers.go:580]     Audit-Id: 54c05ec1-97ad-4f19-ad4d-f106f97624a4
	I0706 20:57:51.068820    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:51.068820    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:51.068820    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:51.069791    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:51.069791    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:51 GMT
	I0706 20:57:51.070120    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:51.070872    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:51.070920    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:51.070920    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:51.070970    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:51.085892    8620 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0706 20:57:51.085892    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:51.085892    8620 round_trippers.go:580]     Audit-Id: 9843e4b3-59b3-4a93-94a6-1f1f00d6b5d5
	I0706 20:57:51.085892    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:51.086061    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:51.086061    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:51.086061    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:51.086061    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:51 GMT
	I0706 20:57:51.086285    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:51.567279    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:51.567571    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:51.567571    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:51.567571    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:51.573289    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:57:51.573289    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:51.573289    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:51.573289    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:51.573289    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:51.573289    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:51 GMT
	I0706 20:57:51.573289    8620 round_trippers.go:580]     Audit-Id: 7eb1533f-e71f-4f08-a4a7-d0ad0484db4c
	I0706 20:57:51.573289    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:51.574115    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:51.575334    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:51.575445    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:51.575445    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:51.575535    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:51.578208    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:51.578208    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:51.578208    8620 round_trippers.go:580]     Audit-Id: 6bcfa2dc-fb43-4c09-b361-5ba205fe458f
	I0706 20:57:51.578208    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:51.578208    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:51.578208    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:51.578208    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:51.578208    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:51 GMT
	I0706 20:57:51.579225    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:52.066231    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:52.066351    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:52.066351    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:52.066351    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:52.072463    8620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0706 20:57:52.072463    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:52.072463    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:52.072463    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:52.072463    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:52.072463    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:52 GMT
	I0706 20:57:52.072463    8620 round_trippers.go:580]     Audit-Id: d098f9c0-d2ca-4e1c-a2f9-8c350181976c
	I0706 20:57:52.072463    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:52.072463    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1118","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6543 chars]
	I0706 20:57:52.073245    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:52.073245    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:52.073796    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:52.073796    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:52.077307    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:52.077444    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:52.077444    8620 round_trippers.go:580]     Audit-Id: 284e76e6-2c66-4ac7-b3ab-e437f5a02315
	I0706 20:57:52.077444    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:52.077444    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:52.077525    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:52.077525    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:52.077525    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:52 GMT
	I0706 20:57:52.077860    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:52.078318    8620 pod_ready.go:102] pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace has status "Ready":"False"
	I0706 20:57:52.569461    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:57:52.569461    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:52.569550    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:52.569550    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:52.573381    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:52.573381    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:52.573381    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:52.573381    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:52 GMT
	I0706 20:57:52.573381    8620 round_trippers.go:580]     Audit-Id: b76377f4-3ad9-4c2c-b053-8ddc54c5164e
	I0706 20:57:52.573899    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:52.573899    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:52.573899    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:52.574062    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1242","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6491 chars]
	I0706 20:57:52.574831    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:52.574831    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:52.574831    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:52.574929    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:52.577765    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:52.577765    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:52.577765    8620 round_trippers.go:580]     Audit-Id: a621ac6a-38dc-48cd-a518-4763bc3c12c0
	I0706 20:57:52.578180    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:52.578180    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:52.578239    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:52.578239    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:52.578312    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:52 GMT
	I0706 20:57:52.578401    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:52.578966    8620 pod_ready.go:92] pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace has status "Ready":"True"
	I0706 20:57:52.578966    8620 pod_ready.go:81] duration metric: took 9.5327572s waiting for pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:52.578966    8620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:52.579102    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-144300
	I0706 20:57:52.579142    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:52.579142    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:52.579142    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:52.582030    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:52.582030    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:52.582030    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:52 GMT
	I0706 20:57:52.582030    8620 round_trippers.go:580]     Audit-Id: 665ba4d2-f16e-432f-9a9b-1eb7141a74e6
	I0706 20:57:52.582030    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:52.582030    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:52.582030    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:52.582030    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:52.582030    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-144300","namespace":"kube-system","uid":"3cf71374-8b9f-4bee-a5a7-538dcf09ed5e","resourceVersion":"1211","creationTimestamp":"2023-07-06T20:57:34Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.78.0:2379","kubernetes.io/config.hash":"fcb31617214a528ab159c24a1103b7af","kubernetes.io/config.mirror":"fcb31617214a528ab159c24a1103b7af","kubernetes.io/config.seen":"2023-07-06T20:57:27.010845433Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5843 chars]
	I0706 20:57:52.582735    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:52.582735    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:52.582735    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:52.582735    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:52.586132    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:52.586172    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:52.586172    8620 round_trippers.go:580]     Audit-Id: 35f5a695-9c6f-48b0-b6fa-b7c95dbee09e
	I0706 20:57:52.586172    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:52.586172    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:52.586172    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:52.586172    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:52.586172    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:52 GMT
	I0706 20:57:52.586172    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:52.587098    8620 pod_ready.go:92] pod "etcd-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:57:52.587098    8620 pod_ready.go:81] duration metric: took 8.1323ms waiting for pod "etcd-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:52.587181    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:52.587376    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-144300
	I0706 20:57:52.587376    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:52.587376    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:52.587437    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:52.590525    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:52.590525    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:52.590525    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:52.590525    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:52.590525    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:52 GMT
	I0706 20:57:52.590525    8620 round_trippers.go:580]     Audit-Id: a3500f17-280b-4151-8b8d-567b0dc6c143
	I0706 20:57:52.590525    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:52.590525    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:52.590858    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-144300","namespace":"kube-system","uid":"c3e05753-1404-4779-b0dd-d7bf63b44bdd","resourceVersion":"1205","creationTimestamp":"2023-07-06T20:57:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.78.0:8443","kubernetes.io/config.hash":"39dd3fec037ddc9365ef4418fd161ea0","kubernetes.io/config.mirror":"39dd3fec037ddc9365ef4418fd161ea0","kubernetes.io/config.seen":"2023-07-06T20:57:27.010850733Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7382 chars]
	I0706 20:57:52.591578    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:52.591578    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:52.591578    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:52.591578    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:52.594176    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:52.594176    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:52.594176    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:52.594176    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:52.594876    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:52.594876    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:52 GMT
	I0706 20:57:52.594876    8620 round_trippers.go:580]     Audit-Id: 3a3e8163-7149-40eb-8da8-e2d01517c6e5
	I0706 20:57:52.594876    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:52.595300    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:52.595451    8620 pod_ready.go:92] pod "kube-apiserver-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:57:52.595451    8620 pod_ready.go:81] duration metric: took 8.2056ms waiting for pod "kube-apiserver-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:52.595451    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:52.595451    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-144300
	I0706 20:57:52.595451    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:52.595451    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:52.595451    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:52.598349    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:52.598349    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:52.598349    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:52.598349    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:52.598349    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:52 GMT
	I0706 20:57:52.599150    8620 round_trippers.go:580]     Audit-Id: 87d72344-083a-42d9-a9b6-13b1e4da355d
	I0706 20:57:52.599150    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:52.599150    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:52.599424    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-144300","namespace":"kube-system","uid":"d9a60269-68e9-4ea2-82fe-63cedee225ef","resourceVersion":"1214","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e49c2c2437113c0995d945e240b5b14","kubernetes.io/config.mirror":"8e49c2c2437113c0995d945e240b5b14","kubernetes.io/config.seen":"2023-07-06T20:46:36.035686687Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7167 chars]
	I0706 20:57:52.600089    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:52.600089    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:52.600089    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:52.600089    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:52.602250    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:52.602250    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:52.602250    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:52.602250    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:52.603052    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:52.603052    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:52.603052    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:52 GMT
	I0706 20:57:52.603095    8620 round_trippers.go:580]     Audit-Id: d32656eb-d07d-49a2-9353-1988f617ce81
	I0706 20:57:52.603126    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:52.603708    8620 pod_ready.go:92] pod "kube-controller-manager-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:57:52.603789    8620 pod_ready.go:81] duration metric: took 8.3381ms waiting for pod "kube-controller-manager-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:52.603789    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5vmt" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:52.603789    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f5vmt
	I0706 20:57:52.603789    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:52.603789    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:52.603789    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:52.606801    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:52.607091    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:52.607091    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:52 GMT
	I0706 20:57:52.607150    8620 round_trippers.go:580]     Audit-Id: 411f5857-cf40-4876-9c43-2e1da84e19bc
	I0706 20:57:52.607150    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:52.607150    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:52.607211    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:52.607211    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:52.607246    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-f5vmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"e615de7b-b4a0-4060-aecd-0581b032227d","resourceVersion":"567","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5541 chars]
	I0706 20:57:52.607992    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:57:52.607992    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:52.607992    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:52.607992    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:52.610632    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:52.610632    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:52.611456    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:52.611456    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:52.611456    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:52.611456    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:52.611456    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:52 GMT
	I0706 20:57:52.611456    8620 round_trippers.go:580]     Audit-Id: d7cd0357-2ec7-44d2-a882-c1485694e024
	I0706 20:57:52.611719    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"39c558c0-4469-4217-b2ec-656fe02ca858","resourceVersion":"963","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 3266 chars]
	I0706 20:57:52.611719    8620 pod_ready.go:92] pod "kube-proxy-f5vmt" in "kube-system" namespace has status "Ready":"True"
	I0706 20:57:52.611719    8620 pod_ready.go:81] duration metric: took 7.9302ms waiting for pod "kube-proxy-f5vmt" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:52.611719    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6h62" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:52.773444    8620 request.go:628] Waited for 161.603ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6h62
	I0706 20:57:52.773585    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6h62
	I0706 20:57:52.773616    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:52.773616    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:52.773616    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:52.778563    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:57:52.778563    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:52.778563    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:52.778563    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:52.778563    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:52.778563    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:52 GMT
	I0706 20:57:52.778563    8620 round_trippers.go:580]     Audit-Id: 71a0733e-33a4-4ab1-a6ca-faa1cc3517d2
	I0706 20:57:52.778563    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:52.778563    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h6h62","generateName":"kube-proxy-","namespace":"kube-system","uid":"6949ff1e-f5c0-4ab2-ae7f-6b30775e220d","resourceVersion":"1170","creationTimestamp":"2023-07-06T20:46:49Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0706 20:57:52.979230    8620 request.go:628] Waited for 199.8352ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:52.979230    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:52.979230    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:52.979230    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:52.979230    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:52.983073    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:52.983478    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:52.983567    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:52.983567    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:52 GMT
	I0706 20:57:52.983567    8620 round_trippers.go:580]     Audit-Id: 16eb59db-bc83-40c4-ab32-f84b0604ce4f
	I0706 20:57:52.983567    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:52.983567    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:52.983567    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:52.983567    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:52.984294    8620 pod_ready.go:92] pod "kube-proxy-h6h62" in "kube-system" namespace has status "Ready":"True"
	I0706 20:57:52.984294    8620 pod_ready.go:81] duration metric: took 372.5716ms waiting for pod "kube-proxy-h6h62" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:52.984294    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x7bwf" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:53.180986    8620 request.go:628] Waited for 196.6909ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x7bwf
	I0706 20:57:53.181616    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x7bwf
	I0706 20:57:53.181616    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:53.181616    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:53.181616    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:53.185298    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:53.185298    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:53.185298    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:53.185298    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:53.185697    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:53.185697    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:53.185697    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:53 GMT
	I0706 20:57:53.185697    8620 round_trippers.go:580]     Audit-Id: f0e8bb2d-0379-45dc-bbc4-ef109ee55472
	I0706 20:57:53.185951    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x7bwf","generateName":"kube-proxy-","namespace":"kube-system","uid":"3326b20f-277b-435c-8b7e-7d305167affb","resourceVersion":"1074","creationTimestamp":"2023-07-06T20:50:55Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:50:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0706 20:57:53.369547    8620 request.go:628] Waited for 182.9672ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 20:57:53.369705    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 20:57:53.369705    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:53.369705    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:53.369803    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:53.372121    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:57:53.373179    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:53.373179    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:53.373235    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:53.373235    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:53.373235    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:53.373235    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:53 GMT
	I0706 20:57:53.373235    8620 round_trippers.go:580]     Audit-Id: f80c1d13-4129-4cf6-beb7-69204e312a40
	I0706 20:57:53.373235    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"9147f70e-3f8f-4f6c-98f8-6e9530ca9678","resourceVersion":"1089","creationTimestamp":"2023-07-06T20:55:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:55:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 3084 chars]
	I0706 20:57:53.373786    8620 pod_ready.go:92] pod "kube-proxy-x7bwf" in "kube-system" namespace has status "Ready":"True"
	I0706 20:57:53.373786    8620 pod_ready.go:81] duration metric: took 389.4896ms waiting for pod "kube-proxy-x7bwf" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:53.373961    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:53.581186    8620 request.go:628] Waited for 207.0528ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-144300
	I0706 20:57:53.581275    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-144300
	I0706 20:57:53.581275    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:53.581275    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:53.581275    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:53.586726    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:57:53.586750    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:53.586750    8620 round_trippers.go:580]     Audit-Id: 3d2aa0e6-b505-427d-ae73-11649efce7f6
	I0706 20:57:53.586750    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:53.586920    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:53.586920    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:53.586920    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:53.586920    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:53 GMT
	I0706 20:57:53.587113    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-144300","namespace":"kube-system","uid":"70e904dd-fca0-436e-84d9-101fbc1cd9b0","resourceVersion":"1227","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee7970f809cc50237f5ebbefc1799bf2","kubernetes.io/config.mirror":"ee7970f809cc50237f5ebbefc1799bf2","kubernetes.io/config.seen":"2023-07-06T20:46:36.035687887Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4897 chars]
	I0706 20:57:53.769887    8620 request.go:628] Waited for 181.4895ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:53.769887    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:57:53.769887    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:53.769887    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:53.769887    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:53.773531    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:53.774070    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:53.774070    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:53.774070    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:53.774070    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:53.774070    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:53.774070    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:53 GMT
	I0706 20:57:53.774070    8620 round_trippers.go:580]     Audit-Id: aa09d853-08ce-488a-b8ce-b5515f408408
	I0706 20:57:53.774070    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:57:53.774856    8620 pod_ready.go:92] pod "kube-scheduler-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:57:53.774856    8620 pod_ready.go:81] duration metric: took 400.8915ms waiting for pod "kube-scheduler-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:57:53.774957    8620 pod_ready.go:38] duration metric: took 10.7401124s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 20:57:53.774957    8620 api_server.go:52] waiting for apiserver process to appear ...
	I0706 20:57:53.784422    8620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 20:57:53.809238    8620 command_runner.go:130] > 1884
	I0706 20:57:53.809238    8620 api_server.go:72] duration metric: took 14.9287066s to wait for apiserver process to appear ...
	I0706 20:57:53.809238    8620 api_server.go:88] waiting for apiserver healthz status ...
	I0706 20:57:53.809238    8620 api_server.go:253] Checking apiserver healthz at https://172.29.78.0:8443/healthz ...
	I0706 20:57:53.817283    8620 api_server.go:279] https://172.29.78.0:8443/healthz returned 200:
	ok
	I0706 20:57:53.818225    8620 round_trippers.go:463] GET https://172.29.78.0:8443/version
	I0706 20:57:53.818225    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:53.818225    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:53.818225    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:53.820631    8620 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0706 20:57:53.820631    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:53.820631    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:53.820631    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:53.820631    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:53.820631    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:53.820631    8620 round_trippers.go:580]     Content-Length: 263
	I0706 20:57:53.820631    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:53 GMT
	I0706 20:57:53.820631    8620 round_trippers.go:580]     Audit-Id: 297d4199-7c51-409e-be67-37f722a867e7
	I0706 20:57:53.820631    8620 request.go:1188] Response Body: {
	  "major": "1",
	  "minor": "27",
	  "gitVersion": "v1.27.3",
	  "gitCommit": "25b4e43193bcda6c7328a6d147b1fb73a33f1598",
	  "gitTreeState": "clean",
	  "buildDate": "2023-06-14T09:47:40Z",
	  "goVersion": "go1.20.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0706 20:57:53.820631    8620 api_server.go:141] control plane version: v1.27.3
	I0706 20:57:53.820631    8620 api_server.go:131] duration metric: took 11.3936ms to wait for apiserver health ...
	I0706 20:57:53.820631    8620 system_pods.go:43] waiting for kube-system pods to appear ...
	I0706 20:57:53.969673    8620 request.go:628] Waited for 148.798ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods
	I0706 20:57:53.969822    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods
	I0706 20:57:53.969822    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:53.969822    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:53.969822    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:53.975395    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:57:53.975395    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:53.975700    8620 round_trippers.go:580]     Audit-Id: 479bbae6-97e2-4110-bd3d-bd099c2649bd
	I0706 20:57:53.975700    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:53.975700    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:53.975700    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:53.975700    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:53.975700    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:53 GMT
	I0706 20:57:53.978376    8620 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1248"},"items":[{"metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1242","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82438 chars]
	I0706 20:57:53.982158    8620 system_pods.go:59] 12 kube-system pods found
	I0706 20:57:53.982158    8620 system_pods.go:61] "coredns-5d78c9869d-m7j99" [dfa019d5-9528-4f25-8aab-03d1d276bb0c] Running
	I0706 20:57:53.982158    8620 system_pods.go:61] "etcd-multinode-144300" [3cf71374-8b9f-4bee-a5a7-538dcf09ed5e] Running
	I0706 20:57:53.982158    8620 system_pods.go:61] "kindnet-9pjnm" [85523421-1320-4587-ba8c-cbb357ee7eb1] Running
	I0706 20:57:53.982158    8620 system_pods.go:61] "kindnet-jhjpn" [873ba5ea-0975-4046-ac70-7f652703f7c6] Running
	I0706 20:57:53.982158    8620 system_pods.go:61] "kindnet-z6sjf" [c2828b0f-72bb-4203-ab44-280e4de85926] Running
	I0706 20:57:53.982241    8620 system_pods.go:61] "kube-apiserver-multinode-144300" [c3e05753-1404-4779-b0dd-d7bf63b44bdd] Running
	I0706 20:57:53.982241    8620 system_pods.go:61] "kube-controller-manager-multinode-144300" [d9a60269-68e9-4ea2-82fe-63cedee225ef] Running
	I0706 20:57:53.982241    8620 system_pods.go:61] "kube-proxy-f5vmt" [e615de7b-b4a0-4060-aecd-0581b032227d] Running
	I0706 20:57:53.982241    8620 system_pods.go:61] "kube-proxy-h6h62" [6949ff1e-f5c0-4ab2-ae7f-6b30775e220d] Running
	I0706 20:57:53.982289    8620 system_pods.go:61] "kube-proxy-x7bwf" [3326b20f-277b-435c-8b7e-7d305167affb] Running
	I0706 20:57:53.982289    8620 system_pods.go:61] "kube-scheduler-multinode-144300" [70e904dd-fca0-436e-84d9-101fbc1cd9b0] Running
	I0706 20:57:53.982306    8620 system_pods.go:61] "storage-provisioner" [75b208e7-5f24-4849-867c-c7fa45213999] Running
	I0706 20:57:53.982306    8620 system_pods.go:74] duration metric: took 161.6739ms to wait for pod list to return data ...
	I0706 20:57:53.982306    8620 default_sa.go:34] waiting for default service account to be created ...
	I0706 20:57:54.173522    8620 request.go:628] Waited for 190.8912ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/default/serviceaccounts
	I0706 20:57:54.173861    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/default/serviceaccounts
	I0706 20:57:54.173861    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:54.173861    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:54.173861    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:54.177527    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:57:54.177711    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:54.177868    8620 round_trippers.go:580]     Audit-Id: 3b4a0466-83c2-49b2-8134-dec7a8221344
	I0706 20:57:54.177949    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:54.177949    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:54.178055    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:54.178104    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:54.178104    8620 round_trippers.go:580]     Content-Length: 262
	I0706 20:57:54.178104    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:54 GMT
	I0706 20:57:54.178104    8620 request.go:1188] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1248"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"47609419-2a68-437e-86dd-3015903126c0","resourceVersion":"333","creationTimestamp":"2023-07-06T20:46:48Z"}}]}
	I0706 20:57:54.178104    8620 default_sa.go:45] found service account: "default"
	I0706 20:57:54.178104    8620 default_sa.go:55] duration metric: took 195.7964ms for default service account to be created ...
	I0706 20:57:54.178104    8620 system_pods.go:116] waiting for k8s-apps to be running ...
	I0706 20:57:54.377274    8620 request.go:628] Waited for 198.9724ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods
	I0706 20:57:54.377553    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods
	I0706 20:57:54.377697    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:54.377697    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:54.377697    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:54.383993    8620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0706 20:57:54.383993    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:54.384269    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:54 GMT
	I0706 20:57:54.384269    8620 round_trippers.go:580]     Audit-Id: 11862450-aa2a-4464-9aaa-f28f09a82844
	I0706 20:57:54.384269    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:54.384269    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:54.384269    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:54.384269    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:54.385728    8620 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1248"},"items":[{"metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1242","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82438 chars]
	I0706 20:57:54.388653    8620 system_pods.go:86] 12 kube-system pods found
	I0706 20:57:54.389189    8620 system_pods.go:89] "coredns-5d78c9869d-m7j99" [dfa019d5-9528-4f25-8aab-03d1d276bb0c] Running
	I0706 20:57:54.389189    8620 system_pods.go:89] "etcd-multinode-144300" [3cf71374-8b9f-4bee-a5a7-538dcf09ed5e] Running
	I0706 20:57:54.389189    8620 system_pods.go:89] "kindnet-9pjnm" [85523421-1320-4587-ba8c-cbb357ee7eb1] Running
	I0706 20:57:54.389189    8620 system_pods.go:89] "kindnet-jhjpn" [873ba5ea-0975-4046-ac70-7f652703f7c6] Running
	I0706 20:57:54.389189    8620 system_pods.go:89] "kindnet-z6sjf" [c2828b0f-72bb-4203-ab44-280e4de85926] Running
	I0706 20:57:54.389189    8620 system_pods.go:89] "kube-apiserver-multinode-144300" [c3e05753-1404-4779-b0dd-d7bf63b44bdd] Running
	I0706 20:57:54.389189    8620 system_pods.go:89] "kube-controller-manager-multinode-144300" [d9a60269-68e9-4ea2-82fe-63cedee225ef] Running
	I0706 20:57:54.389189    8620 system_pods.go:89] "kube-proxy-f5vmt" [e615de7b-b4a0-4060-aecd-0581b032227d] Running
	I0706 20:57:54.389189    8620 system_pods.go:89] "kube-proxy-h6h62" [6949ff1e-f5c0-4ab2-ae7f-6b30775e220d] Running
	I0706 20:57:54.389189    8620 system_pods.go:89] "kube-proxy-x7bwf" [3326b20f-277b-435c-8b7e-7d305167affb] Running
	I0706 20:57:54.389189    8620 system_pods.go:89] "kube-scheduler-multinode-144300" [70e904dd-fca0-436e-84d9-101fbc1cd9b0] Running
	I0706 20:57:54.389189    8620 system_pods.go:89] "storage-provisioner" [75b208e7-5f24-4849-867c-c7fa45213999] Running
	I0706 20:57:54.389189    8620 system_pods.go:126] duration metric: took 211.0829ms to wait for k8s-apps to be running ...
	I0706 20:57:54.389361    8620 system_svc.go:44] waiting for kubelet service to be running ....
	I0706 20:57:54.397656    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 20:57:54.418976    8620 system_svc.go:56] duration metric: took 29.7872ms WaitForService to wait for kubelet.
	I0706 20:57:54.419098    8620 kubeadm.go:581] duration metric: took 15.5385628s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0706 20:57:54.419098    8620 node_conditions.go:102] verifying NodePressure condition ...
	I0706 20:57:54.579327    8620 request.go:628] Waited for 159.966ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes
	I0706 20:57:54.579454    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes
	I0706 20:57:54.579454    8620 round_trippers.go:469] Request Headers:
	I0706 20:57:54.579454    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:57:54.579454    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:57:54.584971    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:57:54.585840    8620 round_trippers.go:577] Response Headers:
	I0706 20:57:54.585840    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:57:54.585840    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:57:54.585840    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:57:54 GMT
	I0706 20:57:54.585840    8620 round_trippers.go:580]     Audit-Id: f771f4dd-65a0-425c-822f-50647e27a757
	I0706 20:57:54.585931    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:57:54.585931    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:57:54.586308    8620 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1248"},"items":[{"metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 13623 chars]
	I0706 20:57:54.587281    8620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 20:57:54.587390    8620 node_conditions.go:123] node cpu capacity is 2
	I0706 20:57:54.587390    8620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 20:57:54.587390    8620 node_conditions.go:123] node cpu capacity is 2
	I0706 20:57:54.587390    8620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 20:57:54.587390    8620 node_conditions.go:123] node cpu capacity is 2
	I0706 20:57:54.587390    8620 node_conditions.go:105] duration metric: took 168.2906ms to run NodePressure ...
	I0706 20:57:54.587464    8620 start.go:228] waiting for startup goroutines ...
	I0706 20:57:54.587464    8620 start.go:233] waiting for cluster config update ...
	I0706 20:57:54.587464    8620 start.go:242] writing updated cluster config ...
	I0706 20:57:54.605705    8620 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:57:54.606042    8620 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\config.json ...
	I0706 20:57:54.611893    8620 out.go:177] * Starting worker node multinode-144300-m02 in cluster multinode-144300
	I0706 20:57:54.614885    8620 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 20:57:54.614885    8620 cache.go:57] Caching tarball of preloaded images
	I0706 20:57:54.614885    8620 preload.go:174] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0706 20:57:54.614885    8620 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 20:57:54.614885    8620 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\config.json ...
	I0706 20:57:54.617170    8620 start.go:365] acquiring machines lock for multinode-144300-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 20:57:54.618173    8620 start.go:369] acquired machines lock for "multinode-144300-m02" in 1.0033ms
	I0706 20:57:54.618173    8620 start.go:96] Skipping create...Using existing machine configuration
	I0706 20:57:54.618173    8620 fix.go:54] fixHost starting: m02
	I0706 20:57:54.618173    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:57:55.274127    8620 main.go:141] libmachine: [stdout =====>] : Off
	
	I0706 20:57:55.274260    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:55.274379    8620 fix.go:102] recreateIfNeeded on multinode-144300-m02: state=Stopped err=<nil>
	W0706 20:57:55.274379    8620 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 20:57:55.280616    8620 out.go:177] * Restarting existing hyperv VM for "multinode-144300-m02" ...
	I0706 20:57:55.282962    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-144300-m02
	I0706 20:57:56.767253    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:57:56.767253    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:56.767253    8620 main.go:141] libmachine: Waiting for host to start...
	I0706 20:57:56.767253    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:57:57.429603    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:57:57.429802    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:57.429802    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:57:58.366797    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:57:58.367004    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:57:59.377601    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:00.040243    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:00.040243    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:00.040552    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:00.972258    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:58:00.972258    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:01.976626    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:02.642418    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:02.642418    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:02.642536    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:03.586090    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:58:03.586343    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:04.591279    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:05.287889    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:05.288368    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:05.288368    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:06.242042    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:58:06.242042    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:07.243594    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:07.922260    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:07.922311    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:07.922361    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:08.903103    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:58:08.903151    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:09.916298    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:10.614669    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:10.614867    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:10.614867    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:11.595951    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:58:11.596003    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:12.596259    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:13.260192    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:13.260253    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:13.260407    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:14.194550    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:58:14.194774    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:15.205190    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:15.871596    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:15.871802    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:15.871905    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:16.807531    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:58:16.807835    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:17.809754    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:18.478521    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:18.478842    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:18.478928    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:19.571491    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:19.571686    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:19.574000    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:20.256241    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:20.256241    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:20.256241    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:21.217645    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:21.217896    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:21.218449    8620 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\config.json ...
	I0706 20:58:21.220643    8620 machine.go:88] provisioning docker machine ...
	I0706 20:58:21.220717    8620 buildroot.go:166] provisioning hostname "multinode-144300-m02"
	I0706 20:58:21.220717    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:21.886798    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:21.887044    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:21.887127    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:22.835549    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:22.835706    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:22.839674    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:58:22.840478    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.74.65 22 <nil> <nil>}
	I0706 20:58:22.840478    8620 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-144300-m02 && echo "multinode-144300-m02" | sudo tee /etc/hostname
	I0706 20:58:23.001028    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-144300-m02
	
	I0706 20:58:23.001131    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:23.672364    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:23.672364    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:23.672465    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:24.648742    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:24.648823    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:24.652564    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:58:24.653609    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.74.65 22 <nil> <nil>}
	I0706 20:58:24.653609    8620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-144300-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-144300-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-144300-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0706 20:58:24.804746    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0706 20:58:24.804802    8620 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0706 20:58:24.804802    8620 buildroot.go:174] setting up certificates
	I0706 20:58:24.804802    8620 provision.go:83] configureAuth start
	I0706 20:58:24.804802    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:25.481692    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:25.481692    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:25.481692    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:26.461417    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:26.461417    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:26.461554    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:27.127404    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:27.127802    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:27.127842    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:28.108648    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:28.108706    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:28.108706    8620 provision.go:138] copyHostCerts
	I0706 20:58:28.108921    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0706 20:58:28.108969    8620 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0706 20:58:28.108969    8620 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0706 20:58:28.109506    8620 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0706 20:58:28.110349    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0706 20:58:28.110349    8620 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0706 20:58:28.110349    8620 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0706 20:58:28.111189    8620 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0706 20:58:28.112245    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0706 20:58:28.112357    8620 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0706 20:58:28.112357    8620 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0706 20:58:28.112357    8620 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0706 20:58:28.113685    8620 provision.go:112] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-144300-m02 san=[172.29.74.65 172.29.74.65 localhost 127.0.0.1 minikube multinode-144300-m02]
	I0706 20:58:28.211358    8620 provision.go:172] copyRemoteCerts
	I0706 20:58:28.220353    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0706 20:58:28.220353    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:28.869498    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:28.869498    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:28.869498    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:29.819204    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:29.819364    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:29.819517    8620 sshutil.go:53] new ssh client: &{IP:172.29.74.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\id_rsa Username:docker}
	I0706 20:58:29.926060    8620 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.7056946s)
	I0706 20:58:29.926060    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0706 20:58:29.926060    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0706 20:58:29.962598    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0706 20:58:29.962598    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0706 20:58:29.996000    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0706 20:58:29.996000    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0706 20:58:30.031783    8620 provision.go:86] duration metric: configureAuth took 5.2269428s
	I0706 20:58:30.031783    8620 buildroot.go:189] setting minikube options for container-runtime
	I0706 20:58:30.032581    8620 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:58:30.032652    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:30.718838    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:30.718838    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:30.718838    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:31.689109    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:31.689109    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:31.693431    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:58:31.693938    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.74.65 22 <nil> <nil>}
	I0706 20:58:31.693938    8620 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0706 20:58:31.839078    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0706 20:58:31.839078    8620 buildroot.go:70] root file system type: tmpfs
	I0706 20:58:31.839362    8620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0706 20:58:31.839420    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:32.528264    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:32.528264    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:32.528365    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:33.479572    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:33.479890    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:33.483168    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:58:33.484391    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.74.65 22 <nil> <nil>}
	I0706 20:58:33.484531    8620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.78.0"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0706 20:58:33.644665    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.78.0
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
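
The unit file echoed above clears the inherited `ExecStart=` before setting a new one, exactly as its own comments describe. As a minimal standalone illustration of that reset pattern (the drop-in path and dockerd arguments here are illustrative, not taken from this run):

```ini
# /etc/systemd/system/docker.service.d/override.conf  (illustrative path)
[Service]
# An empty ExecStart= discards the command inherited from the base unit.
# Without this line, systemd sees two ExecStart= settings and refuses to
# start the service ("only allowed for Type=oneshot services").
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```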
	
	I0706 20:58:33.644721    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:34.299000    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:34.299300    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:34.299300    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:35.276511    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:35.276511    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:35.280338    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:58:35.281144    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.74.65 22 <nil> <nil>}
	I0706 20:58:35.281218    8620 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0706 20:58:36.428564    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0706 20:58:36.428624    8620 machine.go:91] provisioned docker machine in 15.2077962s
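The `diff -u old new || { mv ...; restart ...; }` command run a few lines above is an "install only if changed" idiom: `diff` exits non-zero when the files differ or the old file is missing (as happened here, per the `can't stat` message), so the replace-and-restart branch runs exactly in those cases. A sudo-free sketch against throwaway files:

```shell
# Create two temp files standing in for docker.service and docker.service.new.
old=$(mktemp) && new=$(mktemp)
echo "v1" > "$old"
echo "v2" > "$new"
# diff is quiet on match (exit 0); on any difference or a missing $old it
# exits non-zero, triggering the replacement branch after ||.
diff -u "$old" "$new" >/dev/null 2>&1 || mv "$new" "$old"
```

After this runs, `$old` holds the new contents and `$new` is gone; rerunning it is a no-op because the files now match.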
	I0706 20:58:36.428624    8620 start.go:300] post-start starting for "multinode-144300-m02" (driver="hyperv")
	I0706 20:58:36.428707    8620 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0706 20:58:36.438676    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0706 20:58:36.438676    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:37.074195    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:37.074195    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:37.074195    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:38.049436    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:38.049698    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:38.050026    8620 sshutil.go:53] new ssh client: &{IP:172.29.74.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\id_rsa Username:docker}
	I0706 20:58:38.159189    8620 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.7205003s)
	I0706 20:58:38.169544    8620 ssh_runner.go:195] Run: cat /etc/os-release
	I0706 20:58:38.176371    8620 command_runner.go:130] > NAME=Buildroot
	I0706 20:58:38.176495    8620 command_runner.go:130] > VERSION=2021.02.12-1-g6f2898e-dirty
	I0706 20:58:38.176495    8620 command_runner.go:130] > ID=buildroot
	I0706 20:58:38.176495    8620 command_runner.go:130] > VERSION_ID=2021.02.12
	I0706 20:58:38.176495    8620 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0706 20:58:38.176626    8620 info.go:137] Remote host: Buildroot 2021.02.12
	I0706 20:58:38.176626    8620 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0706 20:58:38.176851    8620 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0706 20:58:38.177972    8620 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> 82562.pem in /etc/ssl/certs
	I0706 20:58:38.177972    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> /etc/ssl/certs/82562.pem
	I0706 20:58:38.187730    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0706 20:58:38.202082    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /etc/ssl/certs/82562.pem (1708 bytes)
	I0706 20:58:38.238981    8620 start.go:303] post-start completed in 1.810343s
	I0706 20:58:38.238981    8620 fix.go:56] fixHost completed within 43.6204889s
	I0706 20:58:38.238981    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:38.915582    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:38.915582    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:38.916212    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:39.879205    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:39.879445    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:39.883274    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:58:39.884299    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.74.65 22 <nil> <nil>}
	I0706 20:58:39.884406    8620 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0706 20:58:40.027528    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688677120.027698780
	
	I0706 20:58:40.027528    8620 fix.go:206] guest clock: 1688677120.027698780
	I0706 20:58:40.027528    8620 fix.go:219] Guest: 2023-07-06 20:58:40.02769878 +0000 UTC Remote: 2023-07-06 20:58:38.238981 +0000 UTC m=+129.034794101 (delta=1.78871778s)
	I0706 20:58:40.027528    8620 fix.go:190] guest clock delta is within tolerance: 1.78871778s
	I0706 20:58:40.027528    8620 start.go:83] releasing machines lock for "multinode-144300-m02", held for 45.4090228s
	I0706 20:58:40.027528    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:40.697821    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:40.697821    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:40.697821    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:41.653178    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:41.653357    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:41.656355    8620 out.go:177] * Found network options:
	I0706 20:58:41.659282    8620 out.go:177]   - NO_PROXY=172.29.78.0
	W0706 20:58:41.661834    8620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0706 20:58:41.664331    8620 out.go:177]   - no_proxy=172.29.78.0
	W0706 20:58:41.666414    8620 proxy.go:119] fail to check proxy env: Error ip not in block
	W0706 20:58:41.667979    8620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0706 20:58:41.669000    8620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0706 20:58:41.669000    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:41.676710    8620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0706 20:58:41.676710    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:58:42.395318    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:42.395488    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:42.395318    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:42.395726    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:42.395726    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:42.395488    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:43.478685    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:43.479037    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:43.479216    8620 sshutil.go:53] new ssh client: &{IP:172.29.74.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\id_rsa Username:docker}
	I0706 20:58:43.497098    8620 main.go:141] libmachine: [stdout =====>] : 172.29.74.65
	
	I0706 20:58:43.497098    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:43.497098    8620 sshutil.go:53] new ssh client: &{IP:172.29.74.65 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\id_rsa Username:docker}
	I0706 20:58:43.735270    8620 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0706 20:58:43.736288    8620 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0706 20:58:43.736288    8620 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (2.0595629s)
	I0706 20:58:43.736288    8620 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.067273s)
	W0706 20:58:43.736288    8620 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0706 20:58:43.747049    8620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0706 20:58:43.768873    8620 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0706 20:58:43.768873    8620 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0706 20:58:43.768873    8620 start.go:466] detecting cgroup driver to use...
	I0706 20:58:43.768873    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 20:58:43.797696    8620 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0706 20:58:43.807825    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0706 20:58:43.832208    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0706 20:58:43.846545    8620 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0706 20:58:43.855789    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0706 20:58:43.881045    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 20:58:43.905583    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0706 20:58:43.929471    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 20:58:43.953623    8620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0706 20:58:43.979738    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
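The run of `sed -i` commands above rewrites `/etc/containerd/config.toml` in place to force the `cgroupfs` driver. A self-contained sketch of the `SystemdCgroup` toggle, applied to a scratch file (the TOML snippet is illustrative, not the VM's actual config):

```shell
# Scratch copy standing in for /etc/containerd/config.toml.
cfg=$(mktemp)
printf '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n' > "$cfg"
# Same substitution minikube runs: the \1 backreference preserves the
# line's leading indentation while flipping the value to false.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
```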
	I0706 20:58:44.004944    8620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0706 20:58:44.019245    8620 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0706 20:58:44.028895    8620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0706 20:58:44.053501    8620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:58:44.200028    8620 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0706 20:58:44.225203    8620 start.go:466] detecting cgroup driver to use...
	I0706 20:58:44.233863    8620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0706 20:58:44.251958    8620 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0706 20:58:44.251958    8620 command_runner.go:130] > [Unit]
	I0706 20:58:44.252639    8620 command_runner.go:130] > Description=Docker Application Container Engine
	I0706 20:58:44.252639    8620 command_runner.go:130] > Documentation=https://docs.docker.com
	I0706 20:58:44.252639    8620 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0706 20:58:44.252639    8620 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0706 20:58:44.252716    8620 command_runner.go:130] > StartLimitBurst=3
	I0706 20:58:44.252716    8620 command_runner.go:130] > StartLimitIntervalSec=60
	I0706 20:58:44.252773    8620 command_runner.go:130] > [Service]
	I0706 20:58:44.252773    8620 command_runner.go:130] > Type=notify
	I0706 20:58:44.252773    8620 command_runner.go:130] > Restart=on-failure
	I0706 20:58:44.252773    8620 command_runner.go:130] > Environment=NO_PROXY=172.29.78.0
	I0706 20:58:44.252859    8620 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0706 20:58:44.252859    8620 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0706 20:58:44.252859    8620 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0706 20:58:44.252954    8620 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0706 20:58:44.252954    8620 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0706 20:58:44.253003    8620 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0706 20:58:44.253003    8620 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0706 20:58:44.253003    8620 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0706 20:58:44.253003    8620 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0706 20:58:44.253003    8620 command_runner.go:130] > ExecStart=
	I0706 20:58:44.253073    8620 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0706 20:58:44.253073    8620 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0706 20:58:44.253073    8620 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0706 20:58:44.253073    8620 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0706 20:58:44.253144    8620 command_runner.go:130] > LimitNOFILE=infinity
	I0706 20:58:44.253173    8620 command_runner.go:130] > LimitNPROC=infinity
	I0706 20:58:44.253173    8620 command_runner.go:130] > LimitCORE=infinity
	I0706 20:58:44.253173    8620 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0706 20:58:44.253202    8620 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0706 20:58:44.253202    8620 command_runner.go:130] > TasksMax=infinity
	I0706 20:58:44.253202    8620 command_runner.go:130] > TimeoutStartSec=0
	I0706 20:58:44.253202    8620 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0706 20:58:44.253202    8620 command_runner.go:130] > Delegate=yes
	I0706 20:58:44.253202    8620 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0706 20:58:44.253202    8620 command_runner.go:130] > KillMode=process
	I0706 20:58:44.253202    8620 command_runner.go:130] > [Install]
	I0706 20:58:44.253202    8620 command_runner.go:130] > WantedBy=multi-user.target
	I0706 20:58:44.261538    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 20:58:44.284225    8620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0706 20:58:44.315337    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 20:58:44.341406    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 20:58:44.369630    8620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0706 20:58:44.426158    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 20:58:44.443144    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 20:58:44.469554    8620 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0706 20:58:44.480019    8620 ssh_runner.go:195] Run: which cri-dockerd
	I0706 20:58:44.485553    8620 command_runner.go:130] > /usr/bin/cri-dockerd
	I0706 20:58:44.494229    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0706 20:58:44.508313    8620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0706 20:58:44.541035    8620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0706 20:58:44.683843    8620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0706 20:58:44.815683    8620 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0706 20:58:44.815814    8620 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0706 20:58:44.850520    8620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:58:44.989845    8620 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 20:58:46.581503    8620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.5916463s)
	I0706 20:58:46.590492    8620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 20:58:46.741692    8620 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0706 20:58:46.875838    8620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 20:58:47.033847    8620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:58:47.180600    8620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0706 20:58:47.212618    8620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:58:47.360483    8620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0706 20:58:47.452487    8620 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0706 20:58:47.462749    8620 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0706 20:58:47.471785    8620 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0706 20:58:47.471785    8620 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0706 20:58:47.471852    8620 command_runner.go:130] > Device: 16h/22d	Inode: 911         Links: 1
	I0706 20:58:47.471852    8620 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0706 20:58:47.471852    8620 command_runner.go:130] > Access: 2023-07-06 20:58:47.379663825 +0000
	I0706 20:58:47.471852    8620 command_runner.go:130] > Modify: 2023-07-06 20:58:47.379663825 +0000
	I0706 20:58:47.471852    8620 command_runner.go:130] > Change: 2023-07-06 20:58:47.383664021 +0000
	I0706 20:58:47.471953    8620 command_runner.go:130] >  Birth: -
	I0706 20:58:47.471953    8620 start.go:534] Will wait 60s for crictl version
	I0706 20:58:47.481548    8620 ssh_runner.go:195] Run: which crictl
	I0706 20:58:47.485989    8620 command_runner.go:130] > /usr/bin/crictl
	I0706 20:58:47.493888    8620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0706 20:58:47.541784    8620 command_runner.go:130] > Version:  0.1.0
	I0706 20:58:47.541922    8620 command_runner.go:130] > RuntimeName:  docker
	I0706 20:58:47.541922    8620 command_runner.go:130] > RuntimeVersion:  24.0.2
	I0706 20:58:47.541922    8620 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0706 20:58:47.541996    8620 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0706 20:58:47.548879    8620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 20:58:47.577556    8620 command_runner.go:130] > 24.0.2
	I0706 20:58:47.584213    8620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 20:58:47.613905    8620 command_runner.go:130] > 24.0.2
	I0706 20:58:47.621932    8620 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0706 20:58:47.626818    8620 out.go:177]   - env NO_PROXY=172.29.78.0
	I0706 20:58:47.631809    8620 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0706 20:58:47.635802    8620 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0706 20:58:47.635802    8620 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0706 20:58:47.635802    8620 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0706 20:58:47.635802    8620 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:76:79 Flags:up|broadcast|multicast|running}
	I0706 20:58:47.637848    8620 ip.go:210] interface addr: fe80::9492:57c6:5513:d3cc/64
	I0706 20:58:47.637848    8620 ip.go:210] interface addr: 172.29.64.1/20
	I0706 20:58:47.645845    8620 ssh_runner.go:195] Run: grep 172.29.64.1	host.minikube.internal$ /etc/hosts
	I0706 20:58:47.651928    8620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
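The bash one-liner just above is minikube's idiom for idempotently pinning `host.minikube.internal` in `/etc/hosts`: strip any stale entry with `grep -v`, append the fresh mapping, write to a temp file, then copy it back over the original. A sudo-free sketch against a scratch file (the stale 10.0.0.5 entry is invented for illustration; 172.29.64.1 is the host IP from this run):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n' > "$hosts"
# Drop any existing host.minikube.internal line, append the new mapping,
# then atomically swap the rebuilt file into place.
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '172.29.64.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
```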
	I0706 20:58:47.668287    8620 certs.go:56] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300 for IP: 172.29.74.65
	I0706 20:58:47.668353    8620 certs.go:190] acquiring lock for shared ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:58:47.669005    8620 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0706 20:58:47.669005    8620 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0706 20:58:47.669604    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0706 20:58:47.669798    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0706 20:58:47.669798    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0706 20:58:47.669798    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0706 20:58:47.670612    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem (1338 bytes)
	W0706 20:58:47.670612    8620 certs.go:433] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256_empty.pem, impossibly tiny 0 bytes
	I0706 20:58:47.671145    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0706 20:58:47.671459    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0706 20:58:47.671459    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0706 20:58:47.671459    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0706 20:58:47.672093    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem (1708 bytes)
	I0706 20:58:47.672093    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> /usr/share/ca-certificates/82562.pem
	I0706 20:58:47.672704    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:58:47.672914    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem -> /usr/share/ca-certificates/8256.pem
	I0706 20:58:47.673538    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0706 20:58:47.709050    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0706 20:58:47.744038    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0706 20:58:47.777687    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0706 20:58:47.810878    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /usr/share/ca-certificates/82562.pem (1708 bytes)
	I0706 20:58:47.842326    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0706 20:58:47.878853    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem --> /usr/share/ca-certificates/8256.pem (1338 bytes)
	I0706 20:58:47.920683    8620 ssh_runner.go:195] Run: openssl version
	I0706 20:58:47.928350    8620 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0706 20:58:47.937377    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82562.pem && ln -fs /usr/share/ca-certificates/82562.pem /etc/ssl/certs/82562.pem"
	I0706 20:58:47.960165    8620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82562.pem
	I0706 20:58:47.966746    8620 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  6 20:14 /usr/share/ca-certificates/82562.pem
	I0706 20:58:47.966808    8620 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul  6 20:14 /usr/share/ca-certificates/82562.pem
	I0706 20:58:47.976169    8620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82562.pem
	I0706 20:58:47.984966    8620 command_runner.go:130] > 3ec20f2e
	I0706 20:58:47.993946    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/82562.pem /etc/ssl/certs/3ec20f2e.0"
	I0706 20:58:48.017182    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0706 20:58:48.040040    8620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:58:48.046098    8620 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  6 20:05 /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:58:48.046098    8620 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  6 20:05 /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:58:48.055729    8620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0706 20:58:48.062720    8620 command_runner.go:130] > b5213941
	I0706 20:58:48.071002    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0706 20:58:48.094471    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8256.pem && ln -fs /usr/share/ca-certificates/8256.pem /etc/ssl/certs/8256.pem"
	I0706 20:58:48.117908    8620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8256.pem
	I0706 20:58:48.123953    8620 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  6 20:14 /usr/share/ca-certificates/8256.pem
	I0706 20:58:48.124079    8620 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul  6 20:14 /usr/share/ca-certificates/8256.pem
	I0706 20:58:48.131952    8620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8256.pem
	I0706 20:58:48.138894    8620 command_runner.go:130] > 51391683
	I0706 20:58:48.148005    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8256.pem /etc/ssl/certs/51391683.0"
	I0706 20:58:48.171745    8620 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0706 20:58:48.176590    8620 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0706 20:58:48.177512    8620 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0706 20:58:48.183935    8620 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0706 20:58:48.215602    8620 command_runner.go:130] > cgroupfs
	I0706 20:58:48.215757    8620 cni.go:84] Creating CNI manager for ""
	I0706 20:58:48.215757    8620 cni.go:137] 3 nodes found, recommending kindnet
	I0706 20:58:48.215757    8620 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0706 20:58:48.215831    8620 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.74.65 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-144300 NodeName:multinode-144300-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.78.0"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.74.65 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0706 20:58:48.216001    8620 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.74.65
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-144300-m02"
	  kubeletExtraArgs:
	    node-ip: 172.29.74.65
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.78.0"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0706 20:58:48.216001    8620 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-144300-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.74.65
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-144300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0706 20:58:48.226661    8620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0706 20:58:48.241377    8620 command_runner.go:130] > kubeadm
	I0706 20:58:48.241413    8620 command_runner.go:130] > kubectl
	I0706 20:58:48.241413    8620 command_runner.go:130] > kubelet
	I0706 20:58:48.241413    8620 binaries.go:44] Found k8s binaries, skipping transfer
	I0706 20:58:48.249724    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0706 20:58:48.262100    8620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (381 bytes)
	I0706 20:58:48.288104    8620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0706 20:58:48.318110    8620 ssh_runner.go:195] Run: grep 172.29.78.0	control-plane.minikube.internal$ /etc/hosts
	I0706 20:58:48.323249    8620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.78.0	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0706 20:58:48.340643    8620 host.go:66] Checking if "multinode-144300" exists ...
	I0706 20:58:48.341666    8620 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:58:48.341666    8620 start.go:301] JoinCluster: &{Name:multinode-144300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-144300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.78.0 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.74.65 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.66.123 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 20:58:48.341666    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0706 20:58:48.341666    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:58:49.011240    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:49.011240    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:49.011348    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:49.957621    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:58:49.957621    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:49.958080    8620 sshutil.go:53] new ssh client: &{IP:172.29.78.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 20:58:50.159593    8620 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token bqjgh6.2x95wedqi1eo9br1 --discovery-token-ca-cert-hash sha256:801bc493e18c108e97245d7c598f2964e6d9529fbb4b1b58fd14b2c4e1eaad5d 
	I0706 20:58:50.159687    8620 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0": (1.8179658s)
	I0706 20:58:50.159687    8620 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.29.74.65 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0706 20:58:50.159857    8620 host.go:66] Checking if "multinode-144300" exists ...
	I0706 20:58:50.169381    8620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-144300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0706 20:58:50.169381    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:58:50.824113    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:58:50.824113    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:50.824113    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:58:51.784522    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 20:58:51.784730    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:58:51.785222    8620 sshutil.go:53] new ssh client: &{IP:172.29.78.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 20:58:51.965442    8620 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0706 20:58:52.035807    8620 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-z6sjf, kube-system/kube-proxy-f5vmt
	I0706 20:58:55.080953    8620 command_runner.go:130] > node/multinode-144300-m02 cordoned
	I0706 20:58:55.081016    8620 command_runner.go:130] > pod "busybox-67b7f59bb-qp6pw" has DeletionTimestamp older than 1 seconds, skipping
	I0706 20:58:55.081016    8620 command_runner.go:130] > node/multinode-144300-m02 drained
	I0706 20:58:55.081187    8620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-144300-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (4.9115994s)
	I0706 20:58:55.081187    8620 node.go:108] successfully drained node "m02"
	I0706 20:58:55.081848    8620 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:58:55.082871    8620 kapi.go:59] client config for multinode-144300: &rest.Config{Host:"https://172.29.78.0:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 20:58:55.083978    8620 request.go:1188] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0706 20:58:55.083978    8620 round_trippers.go:463] DELETE https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:58:55.083978    8620 round_trippers.go:469] Request Headers:
	I0706 20:58:55.083978    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:58:55.083978    8620 round_trippers.go:473]     Content-Type: application/json
	I0706 20:58:55.083978    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:58:55.095490    8620 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0706 20:58:55.096353    8620 round_trippers.go:577] Response Headers:
	I0706 20:58:55.096353    8620 round_trippers.go:580]     Audit-Id: e94513d2-0800-412c-aeb0-bc691112f260
	I0706 20:58:55.096424    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:58:55.096424    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:58:55.096424    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:58:55.096424    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:58:55.096424    8620 round_trippers.go:580]     Content-Length: 171
	I0706 20:58:55.096516    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:58:55 GMT
	I0706 20:58:55.096516    8620 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-144300-m02","kind":"nodes","uid":"39c558c0-4469-4217-b2ec-656fe02ca858"}}
	I0706 20:58:55.096577    8620 node.go:124] successfully deleted node "m02"
	I0706 20:58:55.096577    8620 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.29.74.65 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0706 20:58:55.096577    8620 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.29.74.65 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0706 20:58:55.096577    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bqjgh6.2x95wedqi1eo9br1 --discovery-token-ca-cert-hash sha256:801bc493e18c108e97245d7c598f2964e6d9529fbb4b1b58fd14b2c4e1eaad5d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-144300-m02"
	I0706 20:58:55.355958    8620 command_runner.go:130] ! W0706 20:58:55.357184    1340 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0706 20:58:55.821556    8620 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0706 20:58:57.626429    8620 command_runner.go:130] > [preflight] Running pre-flight checks
	I0706 20:58:57.626500    8620 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0706 20:58:57.626500    8620 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0706 20:58:57.626559    8620 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0706 20:58:57.626559    8620 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0706 20:58:57.626559    8620 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0706 20:58:57.626618    8620 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0706 20:58:57.626618    8620 command_runner.go:130] > This node has joined the cluster:
	I0706 20:58:57.626618    8620 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0706 20:58:57.626618    8620 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0706 20:58:57.626671    8620 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0706 20:58:57.626696    8620 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token bqjgh6.2x95wedqi1eo9br1 --discovery-token-ca-cert-hash sha256:801bc493e18c108e97245d7c598f2964e6d9529fbb4b1b58fd14b2c4e1eaad5d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-144300-m02": (2.5301006s)
	I0706 20:58:57.626696    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0706 20:58:58.037143    8620 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0706 20:58:58.037193    8620 start.go:303] JoinCluster complete in 9.6954555s
	I0706 20:58:58.037193    8620 cni.go:84] Creating CNI manager for ""
	I0706 20:58:58.037193    8620 cni.go:137] 3 nodes found, recommending kindnet
	I0706 20:58:58.047641    8620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0706 20:58:58.055506    8620 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0706 20:58:58.055614    8620 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0706 20:58:58.055614    8620 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0706 20:58:58.055614    8620 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0706 20:58:58.055614    8620 command_runner.go:130] > Access: 2023-07-06 20:56:55.006854400 +0000
	I0706 20:58:58.055614    8620 command_runner.go:130] > Modify: 2023-06-30 22:28:30.000000000 +0000
	I0706 20:58:58.055614    8620 command_runner.go:130] > Change: 2023-07-06 20:56:46.220000000 +0000
	I0706 20:58:58.055614    8620 command_runner.go:130] >  Birth: -
	I0706 20:58:58.055740    8620 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0706 20:58:58.055832    8620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0706 20:58:58.102887    8620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0706 20:58:58.527886    8620 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0706 20:58:58.527922    8620 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0706 20:58:58.527922    8620 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0706 20:58:58.527922    8620 command_runner.go:130] > daemonset.apps/kindnet configured
	I0706 20:58:58.528984    8620 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:58:58.529839    8620 kapi.go:59] client config for multinode-144300: &rest.Config{Host:"https://172.29.78.0:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 20:58:58.530732    8620 round_trippers.go:463] GET https://172.29.78.0:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0706 20:58:58.530756    8620 round_trippers.go:469] Request Headers:
	I0706 20:58:58.530756    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:58:58.530756    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:58:58.533475    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:58:58.534189    8620 round_trippers.go:577] Response Headers:
	I0706 20:58:58.534189    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:58:58.534189    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:58:58.534189    8620 round_trippers.go:580]     Content-Length: 292
	I0706 20:58:58.534189    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:58:58 GMT
	I0706 20:58:58.534189    8620 round_trippers.go:580]     Audit-Id: 969f11e1-953b-4912-b233-29ddf82d8381
	I0706 20:58:58.534189    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:58:58.534261    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:58:58.534299    8620 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b60ac185-d6d8-49e6-a9b3-f1d59eb7807f","resourceVersion":"1246","creationTimestamp":"2023-07-06T20:46:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0706 20:58:58.534473    8620 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-144300" context rescaled to 1 replicas
	I0706 20:58:58.534473    8620 start.go:223] Will wait 6m0s for node &{Name:m02 IP:172.29.74.65 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0706 20:58:58.538855    8620 out.go:177] * Verifying Kubernetes components...
	I0706 20:58:58.549934    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 20:58:58.578254    8620 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:58:58.578585    8620 kapi.go:59] client config for multinode-144300: &rest.Config{Host:"https://172.29.78.0:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 20:58:58.579425    8620 node_ready.go:35] waiting up to 6m0s for node "multinode-144300-m02" to be "Ready" ...
	I0706 20:58:58.579425    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:58:58.579425    8620 round_trippers.go:469] Request Headers:
	I0706 20:58:58.579425    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:58:58.579425    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:58:58.582666    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:58:58.582666    8620 round_trippers.go:577] Response Headers:
	I0706 20:58:58.583036    8620 round_trippers.go:580]     Audit-Id: 0cee8c99-e4c1-4b17-b4a1-9df40ac85e5d
	I0706 20:58:58.583036    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:58:58.583036    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:58:58.583036    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:58:58.583036    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:58:58.583083    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:58:58 GMT
	I0706 20:58:58.583112    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1348","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 3228 chars]
	I0706 20:58:59.099247    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:58:59.099247    8620 round_trippers.go:469] Request Headers:
	I0706 20:58:59.099247    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:58:59.099247    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:58:59.102867    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:58:59.103531    8620 round_trippers.go:577] Response Headers:
	I0706 20:58:59.103531    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:58:59 GMT
	I0706 20:58:59.103531    8620 round_trippers.go:580]     Audit-Id: ec3ab115-4582-451a-92f7-88fd803cb2b0
	I0706 20:58:59.103531    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:58:59.103531    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:58:59.103531    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:58:59.103531    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:58:59.103720    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1348","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 3228 chars]
	I0706 20:58:59.584510    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:58:59.584510    8620 round_trippers.go:469] Request Headers:
	I0706 20:58:59.584510    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:58:59.584510    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:58:59.588138    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:58:59.588466    8620 round_trippers.go:577] Response Headers:
	I0706 20:58:59.588466    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:58:59.588466    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:58:59.588466    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:58:59.588466    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:58:59.588466    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:58:59 GMT
	I0706 20:58:59.588466    8620 round_trippers.go:580]     Audit-Id: 5fb2c25d-609b-4056-99bd-ba50d94be73c
	I0706 20:58:59.588466    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1348","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 3228 chars]
	I0706 20:59:00.097113    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:00.097113    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:00.097244    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:00.097244    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:00.100103    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:59:00.100103    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:00.100103    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:00.100103    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:00.100103    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:00 GMT
	I0706 20:59:00.100103    8620 round_trippers.go:580]     Audit-Id: 914ad52b-1266-422c-ba10-57550b9e3ed6
	I0706 20:59:00.100103    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:00.101035    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:00.101141    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1348","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 3228 chars]
	I0706 20:59:00.584553    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:00.584553    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:00.584720    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:00.584720    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:00.587606    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:59:00.587606    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:00.587606    8620 round_trippers.go:580]     Audit-Id: a2fb6de6-fc43-4f49-adc3-b9da3d4017d9
	I0706 20:59:00.587606    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:00.587606    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:00.587606    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:00.587606    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:00.588685    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:00 GMT
	I0706 20:59:00.588786    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1348","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 3228 chars]
	I0706 20:59:00.589334    8620 node_ready.go:58] node "multinode-144300-m02" has status "Ready":"False"
	I0706 20:59:01.086142    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:01.086336    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:01.086336    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:01.086336    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:01.089891    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:59:01.089891    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:01.089891    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:01.089891    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:01.089891    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:01.089891    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:01.089891    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:01 GMT
	I0706 20:59:01.090577    8620 round_trippers.go:580]     Audit-Id: 0906307e-9e3a-405f-907a-0ab203d6305a
	I0706 20:59:01.090794    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1348","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fields
Type":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 3228 chars]
	I0706 20:59:01.585575    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:01.585803    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:01.585897    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:01.585897    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:01.589614    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:59:01.589798    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:01.589798    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:01.589798    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:01.589798    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:01.589798    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:01.589890    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:01 GMT
	I0706 20:59:01.589890    8620 round_trippers.go:580]     Audit-Id: ea7a05e4-1bcf-47f1-b58c-1fd951f31031
	I0706 20:59:01.590013    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1359","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3337 chars]
	I0706 20:59:02.087246    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:02.087317    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:02.087317    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:02.087374    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:02.090813    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:59:02.091451    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:02.091451    8620 round_trippers.go:580]     Audit-Id: 37a92918-3ca9-4a4e-b6a8-a5c7e10f1da2
	I0706 20:59:02.091451    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:02.091451    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:02.091451    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:02.091451    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:02.091451    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:02 GMT
	I0706 20:59:02.091790    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1359","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3337 chars]
	I0706 20:59:02.591596    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:02.591682    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:02.591682    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:02.591682    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:02.596029    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:59:02.596029    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:02.596029    8620 round_trippers.go:580]     Audit-Id: ff4a09f5-da50-4b0d-a86f-7b51effcf88b
	I0706 20:59:02.596029    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:02.596284    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:02.596284    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:02.596284    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:02.596284    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:02 GMT
	I0706 20:59:02.596389    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1359","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3337 chars]
	I0706 20:59:02.596878    8620 node_ready.go:58] node "multinode-144300-m02" has status "Ready":"False"
	I0706 20:59:03.093241    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:03.093346    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:03.093346    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:03.093346    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:03.096600    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:59:03.096600    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:03.096600    8620 round_trippers.go:580]     Audit-Id: 175b55cb-4e90-4dc3-85db-c4fb737aeae4
	I0706 20:59:03.096600    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:03.096600    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:03.096600    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:03.097336    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:03.097336    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:03 GMT
	I0706 20:59:03.097336    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1359","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3337 chars]
	I0706 20:59:03.590511    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:03.590600    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:03.590600    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:03.590600    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:03.593929    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:59:03.594251    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:03.594251    8620 round_trippers.go:580]     Audit-Id: 9df4d270-9d12-4b31-a524-8db894e4811f
	I0706 20:59:03.594251    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:03.594251    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:03.594340    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:03.594340    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:03.594340    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:03 GMT
	I0706 20:59:03.594475    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1359","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3337 chars]
	I0706 20:59:04.087819    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:04.087819    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:04.087819    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:04.087888    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:04.090596    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:59:04.090596    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:04.090596    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:04.090596    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:04.091353    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:04 GMT
	I0706 20:59:04.091353    8620 round_trippers.go:580]     Audit-Id: 6dc0c1c7-fdb0-49bf-838b-19976a9e6142
	I0706 20:59:04.091353    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:04.091353    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:04.091428    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1359","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3337 chars]
	I0706 20:59:04.587205    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:04.587279    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:04.587279    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:04.587279    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:04.591099    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:59:04.591402    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:04.591402    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:04.591402    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:04.591402    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:04.591489    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:04 GMT
	I0706 20:59:04.591489    8620 round_trippers.go:580]     Audit-Id: ffd946c6-0310-4418-abce-2149bec48844
	I0706 20:59:04.591489    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:04.591489    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1359","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3337 chars]
	I0706 20:59:05.086997    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:05.087048    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:05.087048    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:05.087119    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:05.090002    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:59:05.090002    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:05.090002    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:05 GMT
	I0706 20:59:05.090002    8620 round_trippers.go:580]     Audit-Id: 3f9cf314-6d19-482b-9975-80274c0ce3b6
	I0706 20:59:05.090816    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:05.090816    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:05.090816    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:05.090816    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:05.091163    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1359","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3337 chars]
	I0706 20:59:05.091661    8620 node_ready.go:58] node "multinode-144300-m02" has status "Ready":"False"
	I0706 20:59:05.587160    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:05.587232    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:05.587232    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:05.587357    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:05.590991    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:59:05.590991    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:05.590991    8620 round_trippers.go:580]     Audit-Id: 479ecfec-a94e-4ed8-8972-045af5447e61
	I0706 20:59:05.590991    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:05.590991    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:05.591228    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:05.591228    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:05.591228    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:05 GMT
	I0706 20:59:05.591399    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1359","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3337 chars]
	I0706 20:59:06.086587    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:06.086936    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:06.086936    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:06.086936    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:06.089169    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:59:06.089169    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:06.090012    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:06.090012    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:06 GMT
	I0706 20:59:06.090012    8620 round_trippers.go:580]     Audit-Id: 43eff21a-2d4a-4834-9b07-0a24c3163425
	I0706 20:59:06.090012    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:06.090012    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:06.090012    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:06.090182    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1359","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3337 chars]
	I0706 20:59:06.591882    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:06.591882    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:06.591882    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:06.591882    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:06.594473    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:59:06.594473    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:06.594473    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:06.594473    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:06.594473    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:06.594473    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:06.595477    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:06 GMT
	I0706 20:59:06.595477    8620 round_trippers.go:580]     Audit-Id: 883a9b9b-90b8-4810-8433-69068995d8cc
	I0706 20:59:06.595477    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1359","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3337 chars]
	I0706 20:59:07.093823    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:07.093978    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:07.093978    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:07.093978    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:07.097688    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:59:07.098027    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:07.098027    8620 round_trippers.go:580]     Audit-Id: d3872832-c5a1-41f0-98f2-300921a3c801
	I0706 20:59:07.098027    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:07.098027    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:07.098027    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:07.098027    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:07.098027    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:07 GMT
	I0706 20:59:07.098247    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1375","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 3506 chars]
	I0706 20:59:07.098784    8620 node_ready.go:58] node "multinode-144300-m02" has status "Ready":"False"
	I0706 20:59:07.594456    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:07.594542    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:07.594542    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:07.594542    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:07.597499    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:59:07.597499    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:07.597833    8620 round_trippers.go:580]     Audit-Id: cf4e087a-54a2-4b6d-bd1b-9763f99e69a2
	I0706 20:59:07.597833    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:07.597833    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:07.597833    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:07.597833    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:07.598005    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:07 GMT
	I0706 20:59:07.598130    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1378","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 3372 chars]
	I0706 20:59:07.598774    8620 node_ready.go:49] node "multinode-144300-m02" has status "Ready":"True"
	I0706 20:59:07.598881    8620 node_ready.go:38] duration metric: took 9.0193897s waiting for node "multinode-144300-m02" to be "Ready" ...
	I0706 20:59:07.598881    8620 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 20:59:07.599095    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods
	I0706 20:59:07.599172    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:07.599172    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:07.599172    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:07.604562    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:59:07.604562    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:07.604562    8620 round_trippers.go:580]     Audit-Id: 447cd442-b2d9-4977-88ba-9a750aaecef3
	I0706 20:59:07.604562    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:07.604562    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:07.604562    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:07.604921    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:07.604921    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:07 GMT
	I0706 20:59:07.607543    8620 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1380"},"items":[{"metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1242","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 83292 chars]
	I0706 20:59:07.611655    8620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:07.611832    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 20:59:07.611832    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:07.611832    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:07.611832    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:07.614134    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:59:07.614134    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:07.614134    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:07.614134    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:07 GMT
	I0706 20:59:07.614134    8620 round_trippers.go:580]     Audit-Id: 430a4b16-31ae-4352-a415-b4add290c28f
	I0706 20:59:07.614134    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:07.614134    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:07.615082    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:07.615260    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1242","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6491 chars]
	I0706 20:59:07.615758    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:59:07.615857    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:07.615857    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:07.615857    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:07.619744    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:59:07.619774    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:07.619863    8620 round_trippers.go:580]     Audit-Id: 64451d06-32f5-4557-be5f-105695bd7201
	I0706 20:59:07.619863    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:07.619863    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:07.619863    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:07.619863    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:07.619912    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:07 GMT
	I0706 20:59:07.620130    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:59:07.620632    8620 pod_ready.go:92] pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace has status "Ready":"True"
	I0706 20:59:07.620632    8620 pod_ready.go:81] duration metric: took 8.9767ms waiting for pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:07.620632    8620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:07.620789    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-144300
	I0706 20:59:07.620789    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:07.620789    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:07.620858    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:07.625840    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:59:07.625840    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:07.625840    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:07 GMT
	I0706 20:59:07.625840    8620 round_trippers.go:580]     Audit-Id: eebbb0f3-a17b-4c33-af78-78fabbf2c775
	I0706 20:59:07.625840    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:07.625840    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:07.625840    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:07.625840    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:07.626564    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-144300","namespace":"kube-system","uid":"3cf71374-8b9f-4bee-a5a7-538dcf09ed5e","resourceVersion":"1211","creationTimestamp":"2023-07-06T20:57:34Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.78.0:2379","kubernetes.io/config.hash":"fcb31617214a528ab159c24a1103b7af","kubernetes.io/config.mirror":"fcb31617214a528ab159c24a1103b7af","kubernetes.io/config.seen":"2023-07-06T20:57:27.010845433Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5843 chars]
	I0706 20:59:07.626564    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:59:07.626564    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:07.626564    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:07.626564    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:07.629977    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:59:07.629977    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:07.629977    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:07.629977    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:07.629977    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:07 GMT
	I0706 20:59:07.630176    8620 round_trippers.go:580]     Audit-Id: 2b26e402-92a1-4da9-89c4-1a06338fb812
	I0706 20:59:07.630176    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:07.630176    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:07.630580    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:59:07.630990    8620 pod_ready.go:92] pod "etcd-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:59:07.631030    8620 pod_ready.go:81] duration metric: took 10.3976ms waiting for pod "etcd-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:07.631091    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:07.631114    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-144300
	I0706 20:59:07.631114    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:07.631114    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:07.631114    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:07.634159    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:59:07.634431    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:07.634431    8620 round_trippers.go:580]     Audit-Id: 340890d8-b121-4b57-a654-31f48e49e0b2
	I0706 20:59:07.634483    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:07.634483    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:07.634483    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:07.634483    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:07.634483    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:07 GMT
	I0706 20:59:07.634483    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-144300","namespace":"kube-system","uid":"c3e05753-1404-4779-b0dd-d7bf63b44bdd","resourceVersion":"1205","creationTimestamp":"2023-07-06T20:57:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.78.0:8443","kubernetes.io/config.hash":"39dd3fec037ddc9365ef4418fd161ea0","kubernetes.io/config.mirror":"39dd3fec037ddc9365ef4418fd161ea0","kubernetes.io/config.seen":"2023-07-06T20:57:27.010850733Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7382 chars]
	I0706 20:59:07.635102    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:59:07.635102    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:07.635102    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:07.635102    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:07.642034    8620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0706 20:59:07.642034    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:07.642582    8620 round_trippers.go:580]     Audit-Id: dff03e02-68af-4a0a-a3cf-68cc2c46aa42
	I0706 20:59:07.642582    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:07.642582    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:07.642636    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:07.642636    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:07.642669    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:07 GMT
	I0706 20:59:07.642884    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:59:07.642914    8620 pod_ready.go:92] pod "kube-apiserver-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:59:07.642914    8620 pod_ready.go:81] duration metric: took 11.8233ms waiting for pod "kube-apiserver-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:07.642914    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:07.642914    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-144300
	I0706 20:59:07.642914    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:07.642914    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:07.642914    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:07.645356    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:59:07.645356    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:07.645356    8620 round_trippers.go:580]     Audit-Id: 4e606b7b-3c8c-4323-8d76-202ab82dcfc9
	I0706 20:59:07.645356    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:07.645356    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:07.645356    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:07.645356    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:07.645356    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:07 GMT
	I0706 20:59:07.645356    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-144300","namespace":"kube-system","uid":"d9a60269-68e9-4ea2-82fe-63cedee225ef","resourceVersion":"1214","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e49c2c2437113c0995d945e240b5b14","kubernetes.io/config.mirror":"8e49c2c2437113c0995d945e240b5b14","kubernetes.io/config.seen":"2023-07-06T20:46:36.035686687Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7167 chars]
	I0706 20:59:07.646969    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:59:07.647019    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:07.647019    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:07.647019    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:07.653722    8620 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0706 20:59:07.653722    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:07.653722    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:07.653722    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:07.653787    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:07 GMT
	I0706 20:59:07.653787    8620 round_trippers.go:580]     Audit-Id: 29d55826-ab0c-4d5e-b64e-c017cfaaa392
	I0706 20:59:07.653787    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:07.653787    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:07.653787    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:59:07.654326    8620 pod_ready.go:92] pod "kube-controller-manager-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:59:07.654399    8620 pod_ready.go:81] duration metric: took 11.4844ms waiting for pod "kube-controller-manager-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:07.654399    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5vmt" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:07.797909    8620 request.go:628] Waited for 143.2745ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f5vmt
	I0706 20:59:07.798041    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f5vmt
	I0706 20:59:07.798041    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:07.798234    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:07.798234    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:07.803172    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:59:07.803172    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:07.803172    8620 round_trippers.go:580]     Audit-Id: ee7352e2-fd1c-4b2c-ab88-3157975fcca4
	I0706 20:59:07.803172    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:07.803172    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:07.803172    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:07.803172    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:07.803720    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:07 GMT
	I0706 20:59:07.803892    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-f5vmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"e615de7b-b4a0-4060-aecd-0581b032227d","resourceVersion":"1361","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0706 20:59:08.001024    8620 request.go:628] Waited for 196.1413ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:08.001261    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 20:59:08.001288    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:08.001288    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:08.001288    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:08.005075    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:59:08.005370    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:08.005370    8620 round_trippers.go:580]     Audit-Id: 82d89c7f-5229-440d-ba88-33c4077b9ed9
	I0706 20:59:08.005370    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:08.005370    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:08.005370    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:08.005370    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:08.005509    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:08 GMT
	I0706 20:59:08.005509    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1378","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 3372 chars]
	I0706 20:59:08.005509    8620 pod_ready.go:92] pod "kube-proxy-f5vmt" in "kube-system" namespace has status "Ready":"True"
	I0706 20:59:08.006060    8620 pod_ready.go:81] duration metric: took 351.6585ms waiting for pod "kube-proxy-f5vmt" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:08.006060    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6h62" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:08.203264    8620 request.go:628] Waited for 196.79ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6h62
	I0706 20:59:08.203444    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6h62
	I0706 20:59:08.203444    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:08.203550    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:08.203550    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:08.209256    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:59:08.209256    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:08.209256    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:08.209256    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:08.209256    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:08 GMT
	I0706 20:59:08.209256    8620 round_trippers.go:580]     Audit-Id: 1ed9cbcd-4ca8-4546-8747-54dcf2a2d8fa
	I0706 20:59:08.209256    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:08.209256    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:08.210036    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h6h62","generateName":"kube-proxy-","namespace":"kube-system","uid":"6949ff1e-f5c0-4ab2-ae7f-6b30775e220d","resourceVersion":"1170","creationTimestamp":"2023-07-06T20:46:49Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0706 20:59:08.404647    8620 request.go:628] Waited for 193.7873ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:59:08.404976    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:59:08.404976    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:08.404976    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:08.405050    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:08.409766    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:59:08.410050    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:08.410050    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:08 GMT
	I0706 20:59:08.410050    8620 round_trippers.go:580]     Audit-Id: e51c136c-8ae3-4fd3-85c8-02029e7ace20
	I0706 20:59:08.410050    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:08.410169    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:08.410169    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:08.410169    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:08.410497    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:59:08.410967    8620 pod_ready.go:92] pod "kube-proxy-h6h62" in "kube-system" namespace has status "Ready":"True"
	I0706 20:59:08.410967    8620 pod_ready.go:81] duration metric: took 404.9046ms waiting for pod "kube-proxy-h6h62" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:08.410967    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x7bwf" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:08.605587    8620 request.go:628] Waited for 194.4631ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x7bwf
	I0706 20:59:08.605665    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x7bwf
	I0706 20:59:08.605665    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:08.605665    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:08.605767    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:08.610791    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 20:59:08.610791    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:08.610791    8620 round_trippers.go:580]     Audit-Id: c66981a0-7b9f-494a-8bbd-fe9f744da9bd
	I0706 20:59:08.610856    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:08.610882    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:08.610882    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:08.610882    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:08.610882    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:08 GMT
	I0706 20:59:08.611079    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x7bwf","generateName":"kube-proxy-","namespace":"kube-system","uid":"3326b20f-277b-435c-8b7e-7d305167affb","resourceVersion":"1275","creationTimestamp":"2023-07-06T20:50:55Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:50:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5972 chars]
	I0706 20:59:08.807948    8620 request.go:628] Waited for 196.0277ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 20:59:08.808277    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 20:59:08.808277    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:08.808277    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:08.808277    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:08.810723    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 20:59:08.810723    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:08.810723    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:08.810723    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:08.810723    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:08.810723    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:08 GMT
	I0706 20:59:08.811623    8620 round_trippers.go:580]     Audit-Id: fff1e0dc-16ce-4563-9176-66705c1dadcc
	I0706 20:59:08.811623    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:08.811771    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"9147f70e-3f8f-4f6c-98f8-6e9530ca9678","resourceVersion":"1295","creationTimestamp":"2023-07-06T20:55:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:55:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 3840 chars]
	I0706 20:59:08.812163    8620 pod_ready.go:97] node "multinode-144300-m03" hosting pod "kube-proxy-x7bwf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-144300-m03" has status "Ready":"Unknown"
	I0706 20:59:08.812264    8620 pod_ready.go:81] duration metric: took 401.1925ms waiting for pod "kube-proxy-x7bwf" in "kube-system" namespace to be "Ready" ...
	E0706 20:59:08.812264    8620 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-144300-m03" hosting pod "kube-proxy-x7bwf" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-144300-m03" has status "Ready":"Unknown"
	I0706 20:59:08.812264    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:08.995717    8620 request.go:628] Waited for 183.2277ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-144300
	I0706 20:59:08.995817    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-144300
	I0706 20:59:08.995817    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:08.995817    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:08.995817    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:09.000372    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:59:09.000454    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:09.000454    8620 round_trippers.go:580]     Audit-Id: 248f0216-70ab-4efa-bb2e-95edd9eb6576
	I0706 20:59:09.000454    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:09.000454    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:09.000454    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:09.000537    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:09.000577    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:09 GMT
	I0706 20:59:09.000828    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-144300","namespace":"kube-system","uid":"70e904dd-fca0-436e-84d9-101fbc1cd9b0","resourceVersion":"1227","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee7970f809cc50237f5ebbefc1799bf2","kubernetes.io/config.mirror":"ee7970f809cc50237f5ebbefc1799bf2","kubernetes.io/config.seen":"2023-07-06T20:46:36.035687887Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4897 chars]
	I0706 20:59:09.198355    8620 request.go:628] Waited for 196.6334ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:59:09.198546    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 20:59:09.198546    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:09.198546    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:09.198546    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:09.203982    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 20:59:09.203982    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:09.203982    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:09 GMT
	I0706 20:59:09.203982    8620 round_trippers.go:580]     Audit-Id: 580c9d99-0549-4860-9ea5-c24538cccef0
	I0706 20:59:09.203982    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:09.203982    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:09.203982    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:09.203982    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:09.204664    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 20:59:09.204728    8620 pod_ready.go:92] pod "kube-scheduler-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 20:59:09.204728    8620 pod_ready.go:81] duration metric: took 392.4613ms waiting for pod "kube-scheduler-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 20:59:09.204728    8620 pod_ready.go:38] duration metric: took 1.6058354s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 20:59:09.204728    8620 system_svc.go:44] waiting for kubelet service to be running ....
	I0706 20:59:09.215152    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 20:59:09.234333    8620 system_svc.go:56] duration metric: took 29.6049ms WaitForService to wait for kubelet.
	I0706 20:59:09.234333    8620 kubeadm.go:581] duration metric: took 10.6997817s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0706 20:59:09.234333    8620 node_conditions.go:102] verifying NodePressure condition ...
	I0706 20:59:09.400552    8620 request.go:628] Waited for 166.0523ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes
	I0706 20:59:09.400552    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes
	I0706 20:59:09.400552    8620 round_trippers.go:469] Request Headers:
	I0706 20:59:09.400552    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 20:59:09.400552    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 20:59:09.404125    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 20:59:09.404125    8620 round_trippers.go:577] Response Headers:
	I0706 20:59:09.404125    8620 round_trippers.go:580]     Audit-Id: 24b54492-6894-4834-a6b5-2e84fb362f23
	I0706 20:59:09.404125    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 20:59:09.404125    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 20:59:09.405087    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 20:59:09.405140    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 20:59:09.405140    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 20:59:09 GMT
	I0706 20:59:09.405291    8620 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1381"},"items":[{"metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14485 chars]
	I0706 20:59:09.406178    8620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 20:59:09.406178    8620 node_conditions.go:123] node cpu capacity is 2
	I0706 20:59:09.406178    8620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 20:59:09.406178    8620 node_conditions.go:123] node cpu capacity is 2
	I0706 20:59:09.406178    8620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 20:59:09.406178    8620 node_conditions.go:123] node cpu capacity is 2
	I0706 20:59:09.406178    8620 node_conditions.go:105] duration metric: took 171.8441ms to run NodePressure ...
	I0706 20:59:09.406178    8620 start.go:228] waiting for startup goroutines ...
	I0706 20:59:09.406178    8620 start.go:242] writing updated cluster config ...
	I0706 20:59:09.422904    8620 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:59:09.423255    8620 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\config.json ...
	I0706 20:59:09.430897    8620 out.go:177] * Starting worker node multinode-144300-m03 in cluster multinode-144300
	I0706 20:59:09.435244    8620 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 20:59:09.435244    8620 cache.go:57] Caching tarball of preloaded images
	I0706 20:59:09.435244    8620 preload.go:174] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0706 20:59:09.435788    8620 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 20:59:09.436141    8620 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\config.json ...
	I0706 20:59:09.438873    8620 start.go:365] acquiring machines lock for multinode-144300-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 20:59:09.438873    8620 start.go:369] acquired machines lock for "multinode-144300-m03" in 0s
	I0706 20:59:09.438873    8620 start.go:96] Skipping create...Using existing machine configuration
	I0706 20:59:09.438873    8620 fix.go:54] fixHost starting: m03
	I0706 20:59:09.438873    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:10.079085    8620 main.go:141] libmachine: [stdout =====>] : Off
	
	I0706 20:59:10.079085    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:10.079085    8620 fix.go:102] recreateIfNeeded on multinode-144300-m03: state=Stopped err=<nil>
	W0706 20:59:10.079085    8620 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 20:59:10.084082    8620 out.go:177] * Restarting existing hyperv VM for "multinode-144300-m03" ...
	I0706 20:59:10.086571    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-144300-m03
	I0706 20:59:11.652822    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:59:11.653031    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:11.653031    8620 main.go:141] libmachine: Waiting for host to start...
	I0706 20:59:11.653080    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:12.334898    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:12.334936    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:12.335188    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:13.296697    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:59:13.296938    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:14.299850    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:14.948766    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:14.949122    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:14.949152    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:15.868479    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:59:15.868699    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:16.883065    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:17.564573    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:17.564573    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:17.564702    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:18.527619    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:59:18.527619    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:19.542006    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:20.222607    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:20.222913    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:20.222913    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:21.157581    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:59:21.157821    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:22.158554    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:22.857906    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:22.857906    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:22.858151    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:23.790717    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:59:23.790797    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:24.793158    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:25.458381    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:25.458531    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:25.458531    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:26.409637    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:59:26.409637    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:27.410757    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:28.080018    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:28.080018    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:28.080131    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:29.029653    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:59:29.029798    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:30.035465    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:30.717961    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:30.717961    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:30.718035    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:31.689263    8620 main.go:141] libmachine: [stdout =====>] : 
	I0706 20:59:31.689549    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:32.701128    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:33.386067    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:33.386067    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:33.386149    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:34.478202    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:34.478202    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:34.480840    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:35.188579    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:35.188579    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:35.188800    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:36.165239    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:36.165239    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:36.165779    8620 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300\config.json ...
	I0706 20:59:36.168316    8620 machine.go:88] provisioning docker machine ...
	I0706 20:59:36.168392    8620 buildroot.go:166] provisioning hostname "multinode-144300-m03"
	I0706 20:59:36.168466    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:36.862104    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:36.862104    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:36.862104    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:37.828295    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:37.828649    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:37.832113    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:59:37.832922    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.78.173 22 <nil> <nil>}
	I0706 20:59:37.832922    8620 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-144300-m03 && echo "multinode-144300-m03" | sudo tee /etc/hostname
	I0706 20:59:37.981901    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-144300-m03
	
	I0706 20:59:37.981901    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:38.681953    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:38.682121    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:38.682121    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:39.675920    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:39.676052    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:39.679299    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:59:39.680241    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.78.173 22 <nil> <nil>}
	I0706 20:59:39.680241    8620 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-144300-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-144300-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-144300-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0706 20:59:39.834263    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0706 20:59:39.834263    8620 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0706 20:59:39.834263    8620 buildroot.go:174] setting up certificates
	I0706 20:59:39.834263    8620 provision.go:83] configureAuth start
	I0706 20:59:39.834263    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:40.541530    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:40.541530    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:40.541711    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:41.502673    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:41.502673    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:41.502761    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:42.176963    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:42.176963    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:42.176963    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:43.142702    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:43.142702    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:43.142817    8620 provision.go:138] copyHostCerts
	I0706 20:59:43.142939    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0706 20:59:43.143199    8620 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0706 20:59:43.143256    8620 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0706 20:59:43.143385    8620 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0706 20:59:43.144782    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0706 20:59:43.145015    8620 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0706 20:59:43.145073    8620 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0706 20:59:43.145398    8620 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0706 20:59:43.146532    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0706 20:59:43.146754    8620 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0706 20:59:43.146826    8620 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0706 20:59:43.147229    8620 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0706 20:59:43.148004    8620 provision.go:112] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-144300-m03 san=[172.29.78.173 172.29.78.173 localhost 127.0.0.1 minikube multinode-144300-m03]
	I0706 20:59:43.321262    8620 provision.go:172] copyRemoteCerts
	I0706 20:59:43.331788    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0706 20:59:43.331862    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:43.983611    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:43.983611    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:43.983696    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:44.931761    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:44.932044    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:44.932799    8620 sshutil.go:53] new ssh client: &{IP:172.29.78.173 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m03\id_rsa Username:docker}
	I0706 20:59:45.039766    8620 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.7078898s)
	I0706 20:59:45.039822    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0706 20:59:45.040277    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0706 20:59:45.081663    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0706 20:59:45.081782    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1237 bytes)
	I0706 20:59:45.119153    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0706 20:59:45.119603    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0706 20:59:45.156337    8620 provision.go:86] duration metric: configureAuth took 5.3220356s
	I0706 20:59:45.156337    8620 buildroot.go:189] setting minikube options for container-runtime
	I0706 20:59:45.157628    8620 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:59:45.157703    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:45.848129    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:45.848399    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:45.848399    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:46.814043    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:46.814043    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:46.820551    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:59:46.821259    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.78.173 22 <nil> <nil>}
	I0706 20:59:46.821259    8620 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0706 20:59:46.946717    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0706 20:59:46.946717    8620 buildroot.go:70] root file system type: tmpfs
	I0706 20:59:46.946717    8620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0706 20:59:46.946717    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:47.620613    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:47.620613    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:47.620797    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:48.568706    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:48.568706    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:48.572524    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:59:48.573457    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.78.173 22 <nil> <nil>}
	I0706 20:59:48.573457    8620 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.78.0"
	Environment="NO_PROXY=172.29.78.0,172.29.74.65"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0706 20:59:48.720755    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.78.0
	Environment=NO_PROXY=172.29.78.0,172.29.74.65
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0706 20:59:48.720896    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:49.403132    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:49.403455    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:49.403573    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:50.351513    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:50.351513    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:50.355493    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:59:50.356388    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.78.173 22 <nil> <nil>}
	I0706 20:59:50.356491    8620 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0706 20:59:51.557077    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0706 20:59:51.557077    8620 machine.go:91] provisioned docker machine in 15.3885732s
	I0706 20:59:51.557077    8620 start.go:300] post-start starting for "multinode-144300-m03" (driver="hyperv")
	I0706 20:59:51.557077    8620 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0706 20:59:51.566958    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0706 20:59:51.566958    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:52.244148    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:52.244148    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:52.244380    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:53.236382    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:53.236382    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:53.236790    8620 sshutil.go:53] new ssh client: &{IP:172.29.78.173 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m03\id_rsa Username:docker}
	I0706 20:59:53.347353    8620 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.7803822s)
	I0706 20:59:53.357554    8620 ssh_runner.go:195] Run: cat /etc/os-release
	I0706 20:59:53.363018    8620 command_runner.go:130] > NAME=Buildroot
	I0706 20:59:53.363018    8620 command_runner.go:130] > VERSION=2021.02.12-1-g6f2898e-dirty
	I0706 20:59:53.363018    8620 command_runner.go:130] > ID=buildroot
	I0706 20:59:53.363018    8620 command_runner.go:130] > VERSION_ID=2021.02.12
	I0706 20:59:53.363018    8620 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0706 20:59:53.363018    8620 info.go:137] Remote host: Buildroot 2021.02.12
	I0706 20:59:53.363018    8620 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0706 20:59:53.363682    8620 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0706 20:59:53.364484    8620 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> 82562.pem in /etc/ssl/certs
	I0706 20:59:53.364484    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> /etc/ssl/certs/82562.pem
	I0706 20:59:53.374290    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0706 20:59:53.388451    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /etc/ssl/certs/82562.pem (1708 bytes)
	I0706 20:59:53.430708    8620 start.go:303] post-start completed in 1.8736173s
	I0706 20:59:53.430708    8620 fix.go:56] fixHost completed within 43.9915144s
	I0706 20:59:53.430708    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:54.076564    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:54.076564    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:54.076662    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:55.055925    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:55.055925    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:55.060347    8620 main.go:141] libmachine: Using SSH client type: native
	I0706 20:59:55.061123    8620 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.78.173 22 <nil> <nil>}
	I0706 20:59:55.061307    8620 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0706 20:59:55.184683    8620 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688677195.184864990
	
	I0706 20:59:55.184795    8620 fix.go:206] guest clock: 1688677195.184864990
	I0706 20:59:55.184795    8620 fix.go:219] Guest: 2023-07-06 20:59:55.18486499 +0000 UTC Remote: 2023-07-06 20:59:53.4307085 +0000 UTC m=+204.225972701 (delta=1.75415649s)
	I0706 20:59:55.184795    8620 fix.go:190] guest clock delta is within tolerance: 1.75415649s
	I0706 20:59:55.184795    8620 start.go:83] releasing machines lock for "multinode-144300-m03", held for 45.7455883s
	I0706 20:59:55.185050    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:55.828247    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:55.828247    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:55.828319    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:56.790178    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:56.790609    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:56.793586    8620 out.go:177] * Found network options:
	I0706 20:59:56.796635    8620 out.go:177]   - NO_PROXY=172.29.78.0,172.29.74.65
	W0706 20:59:56.799236    8620 proxy.go:119] fail to check proxy env: Error ip not in block
	W0706 20:59:56.799236    8620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0706 20:59:56.801738    8620 out.go:177]   - no_proxy=172.29.78.0,172.29.74.65
	W0706 20:59:56.803975    8620 proxy.go:119] fail to check proxy env: Error ip not in block
	W0706 20:59:56.803975    8620 proxy.go:119] fail to check proxy env: Error ip not in block
	W0706 20:59:56.804807    8620 proxy.go:119] fail to check proxy env: Error ip not in block
	W0706 20:59:56.804807    8620 proxy.go:119] fail to check proxy env: Error ip not in block
	I0706 20:59:56.806805    8620 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0706 20:59:56.806805    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:56.814785    8620 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0706 20:59:56.814785    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:59:57.520806    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:57.520902    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:57.520806    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:59:57.520902    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:57.520902    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:57.521011    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m03 ).networkadapters[0]).ipaddresses[0]
	I0706 20:59:58.665807    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:58.666556    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:58.667118    8620 sshutil.go:53] new ssh client: &{IP:172.29.78.173 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m03\id_rsa Username:docker}
	I0706 20:59:58.684334    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.173
	
	I0706 20:59:58.685011    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:59:58.685334    8620 sshutil.go:53] new ssh client: &{IP:172.29.78.173 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m03\id_rsa Username:docker}
	I0706 20:59:58.758827    8620 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0706 20:59:58.759031    8620 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (1.944231s)
	W0706 20:59:58.759069    8620 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0706 20:59:58.769483    8620 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0706 20:59:58.844417    8620 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0706 20:59:58.844417    8620 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.0375971s)
	I0706 20:59:58.844609    8620 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0706 20:59:58.844646    8620 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
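The find command logged above renames any bridge/podman CNI configs to `*.mk_disabled` so they cannot conflict with the kindnet CNI minikube is about to install. A minimal sketch of the same rename, run against a scratch directory instead of /etc/cni/net.d so no sudo is needed (filenames are assumptions for illustration):

```shell
# Sketch: disable bridge/podman CNI configs by renaming, as the logged find command does.
d=$(mktemp -d)
touch "$d/87-podman-bridge.conflist" "$d/10-kindnet.conflist"
find "$d" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
echo
ls "$d"
```

Only the podman bridge file matches the name filters; the kindnet config is left untouched, which mirrors the single path printed in the log.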
	I0706 20:59:58.844646    8620 start.go:466] detecting cgroup driver to use...
	I0706 20:59:58.844907    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 20:59:58.870958    8620 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
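The tee pipeline above writes /etc/crictl.yaml so that crictl talks to the containerd socket (a second write later in the log repoints it at cri-dockerd). The same write can be sketched against a temporary root instead of /, so it runs without sudo:

```shell
# Sketch of the crictl.yaml write from the log, against a temp root (no sudo needed).
root=$(mktemp -d)
mkdir -p "$root/etc"
printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' \
  | tee "$root/etc/crictl.yaml" >/dev/null
cat "$root/etc/crictl.yaml"
```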
	I0706 20:59:58.879704    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0706 20:59:58.904608    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0706 20:59:58.918524    8620 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0706 20:59:58.930103    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0706 20:59:58.954556    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 20:59:58.978338    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0706 20:59:59.004694    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 20:59:59.028975    8620 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0706 20:59:59.052875    8620 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
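The run of sed commands above rewrites /etc/containerd/config.toml for the cgroupfs driver. The key substitution is the SystemdCgroup one; here it is applied to a small sample fragment (GNU sed assumed, and the TOML section header is an assumption for illustration):

```shell
# Sketch: the SystemdCgroup rewrite from the log, applied to a sample config fragment.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```

The `( *)` capture and `\1` backreference preserve whatever indentation the key already has, which is why the same one-liner works regardless of how deeply the option is nested.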
	I0706 20:59:59.078372    8620 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0706 20:59:59.091666    8620 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0706 20:59:59.101845    8620 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0706 20:59:59.123894    8620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 20:59:59.267280    8620 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0706 20:59:59.293534    8620 start.go:466] detecting cgroup driver to use...
	I0706 20:59:59.302368    8620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0706 20:59:59.324640    8620 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0706 20:59:59.324640    8620 command_runner.go:130] > [Unit]
	I0706 20:59:59.324640    8620 command_runner.go:130] > Description=Docker Application Container Engine
	I0706 20:59:59.324640    8620 command_runner.go:130] > Documentation=https://docs.docker.com
	I0706 20:59:59.324640    8620 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0706 20:59:59.324640    8620 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0706 20:59:59.324640    8620 command_runner.go:130] > StartLimitBurst=3
	I0706 20:59:59.324640    8620 command_runner.go:130] > StartLimitIntervalSec=60
	I0706 20:59:59.325393    8620 command_runner.go:130] > [Service]
	I0706 20:59:59.325393    8620 command_runner.go:130] > Type=notify
	I0706 20:59:59.325393    8620 command_runner.go:130] > Restart=on-failure
	I0706 20:59:59.325438    8620 command_runner.go:130] > Environment=NO_PROXY=172.29.78.0
	I0706 20:59:59.325455    8620 command_runner.go:130] > Environment=NO_PROXY=172.29.78.0,172.29.74.65
	I0706 20:59:59.325491    8620 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0706 20:59:59.325506    8620 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0706 20:59:59.325506    8620 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0706 20:59:59.325549    8620 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0706 20:59:59.325576    8620 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0706 20:59:59.325590    8620 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0706 20:59:59.325613    8620 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0706 20:59:59.325613    8620 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0706 20:59:59.325641    8620 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0706 20:59:59.325641    8620 command_runner.go:130] > ExecStart=
	I0706 20:59:59.325641    8620 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0706 20:59:59.325695    8620 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0706 20:59:59.325695    8620 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0706 20:59:59.325734    8620 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0706 20:59:59.325734    8620 command_runner.go:130] > LimitNOFILE=infinity
	I0706 20:59:59.325734    8620 command_runner.go:130] > LimitNPROC=infinity
	I0706 20:59:59.325734    8620 command_runner.go:130] > LimitCORE=infinity
	I0706 20:59:59.325734    8620 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0706 20:59:59.325734    8620 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0706 20:59:59.325785    8620 command_runner.go:130] > TasksMax=infinity
	I0706 20:59:59.325785    8620 command_runner.go:130] > TimeoutStartSec=0
	I0706 20:59:59.325785    8620 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0706 20:59:59.325836    8620 command_runner.go:130] > Delegate=yes
	I0706 20:59:59.325870    8620 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0706 20:59:59.325870    8620 command_runner.go:130] > KillMode=process
	I0706 20:59:59.325870    8620 command_runner.go:130] > [Install]
	I0706 20:59:59.325870    8620 command_runner.go:130] > WantedBy=multi-user.target
	I0706 20:59:59.335953    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 20:59:59.363614    8620 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0706 20:59:59.395840    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 20:59:59.424342    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 20:59:59.452946    8620 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0706 20:59:59.508701    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 20:59:59.527058    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 20:59:59.551907    8620 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0706 20:59:59.562480    8620 ssh_runner.go:195] Run: which cri-dockerd
	I0706 20:59:59.567095    8620 command_runner.go:130] > /usr/bin/cri-dockerd
	I0706 20:59:59.575660    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0706 20:59:59.590178    8620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0706 20:59:59.623902    8620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0706 20:59:59.781544    8620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0706 20:59:59.927489    8620 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0706 20:59:59.927674    8620 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0706 20:59:59.970135    8620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:00:00.131903    8620 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 21:00:01.733029    8620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6011144s)
	I0706 21:00:01.742693    8620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 21:00:01.896213    8620 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0706 21:00:02.063074    8620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 21:00:02.223538    8620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:00:02.373872    8620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0706 21:00:02.408044    8620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:00:02.559668    8620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0706 21:00:02.660608    8620 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0706 21:00:02.670532    8620 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0706 21:00:02.678054    8620 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0706 21:00:02.678137    8620 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0706 21:00:02.678137    8620 command_runner.go:130] > Device: 16h/22d	Inode: 973         Links: 1
	I0706 21:00:02.678137    8620 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0706 21:00:02.678204    8620 command_runner.go:130] > Access: 2023-07-06 21:00:02.580702386 +0000
	I0706 21:00:02.678204    8620 command_runner.go:130] > Modify: 2023-07-06 21:00:02.580702386 +0000
	I0706 21:00:02.678204    8620 command_runner.go:130] > Change: 2023-07-06 21:00:02.583702587 +0000
	I0706 21:00:02.678204    8620 command_runner.go:130] >  Birth: -
	I0706 21:00:02.678295    8620 start.go:534] Will wait 60s for crictl version
	I0706 21:00:02.687817    8620 ssh_runner.go:195] Run: which crictl
	I0706 21:00:02.692963    8620 command_runner.go:130] > /usr/bin/crictl
	I0706 21:00:02.701730    8620 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0706 21:00:02.751394    8620 command_runner.go:130] > Version:  0.1.0
	I0706 21:00:02.751394    8620 command_runner.go:130] > RuntimeName:  docker
	I0706 21:00:02.751394    8620 command_runner.go:130] > RuntimeVersion:  24.0.2
	I0706 21:00:02.751394    8620 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0706 21:00:02.751394    8620 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0706 21:00:02.758056    8620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 21:00:02.786639    8620 command_runner.go:130] > 24.0.2
	I0706 21:00:02.793203    8620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 21:00:02.825354    8620 command_runner.go:130] > 24.0.2
	I0706 21:00:02.830532    8620 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0706 21:00:02.833067    8620 out.go:177]   - env NO_PROXY=172.29.78.0
	I0706 21:00:02.835538    8620 out.go:177]   - env NO_PROXY=172.29.78.0,172.29.74.65
	I0706 21:00:02.838006    8620 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0706 21:00:02.841392    8620 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0706 21:00:02.841392    8620 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0706 21:00:02.841392    8620 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0706 21:00:02.841392    8620 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:76:79 Flags:up|broadcast|multicast|running}
	I0706 21:00:02.844379    8620 ip.go:210] interface addr: fe80::9492:57c6:5513:d3cc/64
	I0706 21:00:02.844379    8620 ip.go:210] interface addr: 172.29.64.1/20
	I0706 21:00:02.852004    8620 ssh_runner.go:195] Run: grep 172.29.64.1	host.minikube.internal$ /etc/hosts
	I0706 21:00:02.857327    8620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
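The grep -v / echo pipeline above is minikube's idempotent way of pinning host.minikube.internal: strip any existing entry for that name, then append the current gateway IP, so repeated starts never accumulate duplicate lines. The same trick on a scratch hosts file (the stale 172.29.0.9 entry is an assumption for illustration):

```shell
# Sketch: idempotent hosts-entry replacement, as in the log (temp file, no sudo).
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.29.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '172.29.64.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Matching on the tab-prefixed hostname (`$'\t...'`) rather than the IP is what makes this safe when the gateway address changes between runs.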
	I0706 21:00:02.874597    8620 certs.go:56] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-144300 for IP: 172.29.78.173
	I0706 21:00:02.874667    8620 certs.go:190] acquiring lock for shared ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 21:00:02.874730    8620 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0706 21:00:02.875616    8620 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0706 21:00:02.875822    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0706 21:00:02.876132    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0706 21:00:02.876358    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0706 21:00:02.876600    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0706 21:00:02.877243    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem (1338 bytes)
	W0706 21:00:02.877596    8620 certs.go:433] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256_empty.pem, impossibly tiny 0 bytes
	I0706 21:00:02.877740    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0706 21:00:02.878410    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0706 21:00:02.878754    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0706 21:00:02.879171    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0706 21:00:02.880064    8620 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem (1708 bytes)
	I0706 21:00:02.880449    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> /usr/share/ca-certificates/82562.pem
	I0706 21:00:02.880611    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0706 21:00:02.880891    8620 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem -> /usr/share/ca-certificates/8256.pem
	I0706 21:00:02.882975    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0706 21:00:02.919151    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0706 21:00:02.952456    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0706 21:00:02.989563    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0706 21:00:03.025881    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /usr/share/ca-certificates/82562.pem (1708 bytes)
	I0706 21:00:03.064051    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0706 21:00:03.099369    8620 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem --> /usr/share/ca-certificates/8256.pem (1338 bytes)
	I0706 21:00:03.146716    8620 ssh_runner.go:195] Run: openssl version
	I0706 21:00:03.154816    8620 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0706 21:00:03.163655    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82562.pem && ln -fs /usr/share/ca-certificates/82562.pem /etc/ssl/certs/82562.pem"
	I0706 21:00:03.186924    8620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82562.pem
	I0706 21:00:03.192840    8620 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul  6 20:14 /usr/share/ca-certificates/82562.pem
	I0706 21:00:03.192840    8620 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul  6 20:14 /usr/share/ca-certificates/82562.pem
	I0706 21:00:03.202047    8620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82562.pem
	I0706 21:00:03.209600    8620 command_runner.go:130] > 3ec20f2e
	I0706 21:00:03.219931    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/82562.pem /etc/ssl/certs/3ec20f2e.0"
	I0706 21:00:03.243793    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0706 21:00:03.272776    8620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0706 21:00:03.278757    8620 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul  6 20:05 /usr/share/ca-certificates/minikubeCA.pem
	I0706 21:00:03.278757    8620 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  6 20:05 /usr/share/ca-certificates/minikubeCA.pem
	I0706 21:00:03.289950    8620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0706 21:00:03.297018    8620 command_runner.go:130] > b5213941
	I0706 21:00:03.306163    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0706 21:00:03.330032    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8256.pem && ln -fs /usr/share/ca-certificates/8256.pem /etc/ssl/certs/8256.pem"
	I0706 21:00:03.354933    8620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8256.pem
	I0706 21:00:03.360307    8620 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul  6 20:14 /usr/share/ca-certificates/8256.pem
	I0706 21:00:03.360307    8620 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul  6 20:14 /usr/share/ca-certificates/8256.pem
	I0706 21:00:03.369590    8620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8256.pem
	I0706 21:00:03.376329    8620 command_runner.go:130] > 51391683
	I0706 21:00:03.385478    8620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8256.pem /etc/ssl/certs/51391683.0"
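Each `openssl x509 -hash` / `ln -fs` pair above installs a CA into the OpenSSL trust directory under its subject-hash name (`<hash>.0`), which is how OpenSSL locates CAs at verification time. A self-contained sketch with a throwaway certificate (subject and filenames are assumptions):

```shell
# Sketch: install a CA cert under its OpenSSL subject-hash name, as the log does.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj '/CN=sketchCA' \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
h=$(openssl x509 -hash -noout -in "$dir/ca.pem")   # e.g. a hex value like 3ec20f2e
ln -fs "$dir/ca.pem" "$dir/$h.0"
ls -l "$dir/$h.0"
```

The `.0` suffix disambiguates multiple CAs whose subjects hash to the same value; the log's `3ec20f2e.0`, `b5213941.0`, and `51391683.0` are exactly these hash-named links.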
	I0706 21:00:03.410289    8620 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0706 21:00:03.414453    8620 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0706 21:00:03.415522    8620 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0706 21:00:03.423859    8620 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0706 21:00:03.457837    8620 command_runner.go:130] > cgroupfs
	I0706 21:00:03.457994    8620 cni.go:84] Creating CNI manager for ""
	I0706 21:00:03.457994    8620 cni.go:137] 3 nodes found, recommending kindnet
	I0706 21:00:03.457994    8620 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0706 21:00:03.457994    8620 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.78.173 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-144300 NodeName:multinode-144300-m03 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.78.0"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.78.173 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0706 21:00:03.457994    8620 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.78.173
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-144300-m03"
	  kubeletExtraArgs:
	    node-ip: 172.29.78.173
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.78.0"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0706 21:00:03.457994    8620 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=multinode-144300-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.78.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:multinode-144300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
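The kubelet drop-in above starts with an empty `ExecStart=` line before the real one: systemd treats multiple ExecStart directives on a non-oneshot service as an error, so the blank line is required to clear the inherited command first (the docker.service drop-in earlier in the log uses the same pattern). A sketch writing such a drop-in to a temp file and checking for the reset pattern (the trimmed kubelet flags are an assumption):

```shell
# Sketch: systemd drop-in using the ExecStart reset pattern, written to a temp file.
dropin=$(mktemp)
cat > "$dropin" <<'EOF'
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --config=/var/lib/kubelet/config.yaml
EOF
grep -c '^ExecStart=' "$dropin"
```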
	I0706 21:00:03.467882    8620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0706 21:00:03.482335    8620 command_runner.go:130] > kubeadm
	I0706 21:00:03.482406    8620 command_runner.go:130] > kubectl
	I0706 21:00:03.482406    8620 command_runner.go:130] > kubelet
	I0706 21:00:03.482406    8620 binaries.go:44] Found k8s binaries, skipping transfer
	I0706 21:00:03.489981    8620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0706 21:00:03.503249    8620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0706 21:00:03.526748    8620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0706 21:00:03.558130    8620 ssh_runner.go:195] Run: grep 172.29.78.0	control-plane.minikube.internal$ /etc/hosts
	I0706 21:00:03.563448    8620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.78.0	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0706 21:00:03.579646    8620 host.go:66] Checking if "multinode-144300" exists ...
	I0706 21:00:03.579906    8620 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 21:00:03.579906    8620 start.go:301] JoinCluster: &{Name:multinode-144300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.27.3 ClusterName:multinode-144300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.78.0 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.74.65 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.78.173 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 21:00:03.579906    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0706 21:00:03.579906    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 21:00:04.251602    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:00:04.251893    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:00:04.251893    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:00:05.245473    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 21:00:05.245644    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:00:05.246000    8620 sshutil.go:53] new ssh client: &{IP:172.29.78.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 21:00:05.433085    8620 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token wcntnx.2554jyynsugk1kmw --discovery-token-ca-cert-hash sha256:801bc493e18c108e97245d7c598f2964e6d9529fbb4b1b58fd14b2c4e1eaad5d 
	I0706 21:00:05.433085    8620 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm token create --print-join-command --ttl=0": (1.8531649s)
	I0706 21:00:05.433085    8620 start.go:314] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.29.78.173 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0706 21:00:05.433085    8620 host.go:66] Checking if "multinode-144300" exists ...
	I0706 21:00:05.446907    8620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-144300-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0706 21:00:05.446907    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 21:00:06.109402    8620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:00:06.109402    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:00:06.109536    8620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:00:07.102288    8620 main.go:141] libmachine: [stdout =====>] : 172.29.78.0
	
	I0706 21:00:07.102499    8620 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:00:07.102825    8620 sshutil.go:53] new ssh client: &{IP:172.29.78.0 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 21:00:07.274364    8620 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0706 21:00:07.339594    8620 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-jhjpn, kube-system/kube-proxy-x7bwf
	I0706 21:00:07.341622    8620 command_runner.go:130] > node/multinode-144300-m03 cordoned
	I0706 21:00:07.341622    8620 command_runner.go:130] > node/multinode-144300-m03 drained
	I0706 21:00:07.341622    8620 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl drain multinode-144300-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (1.894701s)
	I0706 21:00:07.341622    8620 node.go:108] successfully drained node "m03"
	I0706 21:00:07.341622    8620 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 21:00:07.342617    8620 kapi.go:59] client config for multinode-144300: &rest.Config{Host:"https://172.29.78.0:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 21:00:07.343591    8620 request.go:1188] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0706 21:00:07.343591    8620 round_trippers.go:463] DELETE https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:07.343591    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:07.343591    8620 round_trippers.go:473]     Content-Type: application/json
	I0706 21:00:07.343591    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:07.343591    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:07.351619    8620 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0706 21:00:07.351799    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:07.351799    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:07.351799    8620 round_trippers.go:580]     Content-Length: 171
	I0706 21:00:07.351799    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:07 GMT
	I0706 21:00:07.351799    8620 round_trippers.go:580]     Audit-Id: 5b97bfb1-f7ab-401f-9fac-ca5526710406
	I0706 21:00:07.351799    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:07.351799    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:07.351799    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:07.351962    8620 request.go:1188] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-144300-m03","kind":"nodes","uid":"9147f70e-3f8f-4f6c-98f8-6e9530ca9678"}}
	I0706 21:00:07.351992    8620 node.go:124] successfully deleted node "m03"
	I0706 21:00:07.351992    8620 start.go:318] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.29.78.173 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0706 21:00:07.352062    8620 start.go:322] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.29.78.173 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0706 21:00:07.352135    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wcntnx.2554jyynsugk1kmw --discovery-token-ca-cert-hash sha256:801bc493e18c108e97245d7c598f2964e6d9529fbb4b1b58fd14b2c4e1eaad5d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-144300-m03"
	I0706 21:00:07.693100    8620 command_runner.go:130] ! W0706 21:00:07.694924    1337 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0706 21:00:08.371383    8620 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0706 21:00:10.147867    8620 command_runner.go:130] > [preflight] Running pre-flight checks
	I0706 21:00:10.147867    8620 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0706 21:00:10.147867    8620 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0706 21:00:10.148010    8620 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0706 21:00:10.148010    8620 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0706 21:00:10.148010    8620 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0706 21:00:10.148010    8620 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0706 21:00:10.148010    8620 command_runner.go:130] > This node has joined the cluster:
	I0706 21:00:10.148010    8620 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0706 21:00:10.148010    8620 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0706 21:00:10.148010    8620 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0706 21:00:10.148128    8620 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wcntnx.2554jyynsugk1kmw --discovery-token-ca-cert-hash sha256:801bc493e18c108e97245d7c598f2964e6d9529fbb4b1b58fd14b2c4e1eaad5d --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-144300-m03": (2.7959726s)
	I0706 21:00:10.148178    8620 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0706 21:00:10.311319    8620 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0706 21:00:10.467096    8620 start.go:303] JoinCluster complete in 6.8870677s
	I0706 21:00:10.467166    8620 cni.go:84] Creating CNI manager for ""
	I0706 21:00:10.467166    8620 cni.go:137] 3 nodes found, recommending kindnet
	I0706 21:00:10.476500    8620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0706 21:00:10.483568    8620 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0706 21:00:10.483568    8620 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0706 21:00:10.483568    8620 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0706 21:00:10.483568    8620 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0706 21:00:10.483568    8620 command_runner.go:130] > Access: 2023-07-06 20:56:55.006854400 +0000
	I0706 21:00:10.483568    8620 command_runner.go:130] > Modify: 2023-06-30 22:28:30.000000000 +0000
	I0706 21:00:10.483568    8620 command_runner.go:130] > Change: 2023-07-06 20:56:46.220000000 +0000
	I0706 21:00:10.483568    8620 command_runner.go:130] >  Birth: -
	I0706 21:00:10.483568    8620 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0706 21:00:10.483568    8620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0706 21:00:10.520733    8620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0706 21:00:10.915828    8620 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0706 21:00:10.915896    8620 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0706 21:00:10.915896    8620 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0706 21:00:10.915896    8620 command_runner.go:130] > daemonset.apps/kindnet configured
	I0706 21:00:10.917238    8620 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 21:00:10.917967    8620 kapi.go:59] client config for multinode-144300: &rest.Config{Host:"https://172.29.78.0:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 21:00:10.918493    8620 round_trippers.go:463] GET https://172.29.78.0:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0706 21:00:10.919022    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:10.919022    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:10.919095    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:10.921985    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 21:00:10.921985    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:10.922700    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:10.922700    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:10.922700    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:10.922737    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:10.922737    8620 round_trippers.go:580]     Content-Length: 292
	I0706 21:00:10.922737    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:10 GMT
	I0706 21:00:10.922737    8620 round_trippers.go:580]     Audit-Id: 480f5713-606a-4bb1-a139-f937a1a213d0
	I0706 21:00:10.922737    8620 request.go:1188] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"b60ac185-d6d8-49e6-a9b3-f1d59eb7807f","resourceVersion":"1246","creationTimestamp":"2023-07-06T20:46:35Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0706 21:00:10.923039    8620 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-144300" context rescaled to 1 replicas
	I0706 21:00:10.923116    8620 start.go:223] Will wait 6m0s for node &{Name:m03 IP:172.29.78.173 Port:0 KubernetesVersion:v1.27.3 ContainerRuntime: ControlPlane:false Worker:true}
	I0706 21:00:10.928380    8620 out.go:177] * Verifying Kubernetes components...
	I0706 21:00:10.938829    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 21:00:10.962471    8620 loader.go:373] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 21:00:10.963741    8620 kapi.go:59] client config for multinode-144300: &rest.Config{Host:"https://172.29.78.0:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-144300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 21:00:10.964302    8620 node_ready.go:35] waiting up to 6m0s for node "multinode-144300-m03" to be "Ready" ...
	I0706 21:00:10.964890    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:10.964921    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:10.964921    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:10.964921    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:10.968561    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:10.968685    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:10.968685    8620 round_trippers.go:580]     Content-Length: 4050
	I0706 21:00:10.968685    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:10 GMT
	I0706 21:00:10.968685    8620 round_trippers.go:580]     Audit-Id: 18bba0eb-aa72-40d8-874d-32b9492a776a
	I0706 21:00:10.968776    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:10.968776    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:10.968776    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:10.968776    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:10.968939    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1475","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:v [truncated 3026 chars]
	I0706 21:00:11.479576    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:11.479576    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:11.479576    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:11.479576    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:11.485362    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 21:00:11.485362    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:11.485454    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:11 GMT
	I0706 21:00:11.485454    8620 round_trippers.go:580]     Audit-Id: 64ff12de-0322-492d-b8f0-e465fe1fb8bc
	I0706 21:00:11.485480    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:11.485480    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:11.485511    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:11.485511    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:11.488196    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:11.980762    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:11.980854    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:11.980854    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:11.980854    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:11.984775    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:11.984842    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:11.984842    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:11.984899    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:11.984899    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:11 GMT
	I0706 21:00:11.984899    8620 round_trippers.go:580]     Audit-Id: 722afd5a-c2e2-49c8-95f8-a846388f0b06
	I0706 21:00:11.984974    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:11.984974    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:11.985348    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:12.483757    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:12.483757    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:12.483757    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:12.483757    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:12.487571    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:12.487571    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:12.487571    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:12.487571    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:12 GMT
	I0706 21:00:12.487571    8620 round_trippers.go:580]     Audit-Id: acfd7252-177f-4ada-9eb9-92ccd28b2730
	I0706 21:00:12.487571    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:12.487571    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:12.487571    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:12.487766    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:12.983705    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:12.983705    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:12.983705    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:12.983705    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:12.988155    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 21:00:12.988155    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:12.988155    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:12 GMT
	I0706 21:00:12.988155    8620 round_trippers.go:580]     Audit-Id: 26a36308-8619-4f1a-9ddd-e39f39713b44
	I0706 21:00:12.988155    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:12.988155    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:12.988155    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:12.988155    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:12.989155    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:12.989155    8620 node_ready.go:58] node "multinode-144300-m03" has status "Ready":"False"
	I0706 21:00:13.469706    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:13.469706    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:13.469706    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:13.469706    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:13.473558    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:13.474484    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:13.474484    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:13.474484    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:13 GMT
	I0706 21:00:13.474484    8620 round_trippers.go:580]     Audit-Id: 832acb28-65a4-44f2-b274-cff08c3b29cb
	I0706 21:00:13.474484    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:13.474484    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:13.474567    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:13.474626    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:13.983744    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:13.983744    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:13.983744    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:13.983744    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:13.988402    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 21:00:13.988402    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:13.988402    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:13.988402    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:13.988649    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:13.988649    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:13 GMT
	I0706 21:00:13.988649    8620 round_trippers.go:580]     Audit-Id: 61a901dc-4150-4d4f-8688-0575138e767c
	I0706 21:00:13.988649    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:13.988909    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:14.483864    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:14.483864    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:14.483965    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:14.483965    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:14.487928    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:14.488061    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:14.488061    8620 round_trippers.go:580]     Audit-Id: d717e0be-5c6c-41e5-a559-dcc883942b3f
	I0706 21:00:14.488061    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:14.488061    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:14.488061    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:14.488061    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:14.488165    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:14 GMT
	I0706 21:00:14.488466    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:14.984176    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:14.984176    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:14.984176    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:14.984176    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:14.989908    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 21:00:14.989971    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:14.989971    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:14.990022    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:14 GMT
	I0706 21:00:14.990022    8620 round_trippers.go:580]     Audit-Id: 6ae35eeb-9dc7-473e-bb36-3fad9a5d9c6d
	I0706 21:00:14.990051    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:14.990051    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:14.990051    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:14.990889    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:14.991438    8620 node_ready.go:58] node "multinode-144300-m03" has status "Ready":"False"
	I0706 21:00:15.483455    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:15.483455    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:15.483455    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:15.483455    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:15.489177    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 21:00:15.489177    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:15.489177    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:15.489177    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:15.489177    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:15.489177    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:15 GMT
	I0706 21:00:15.489177    8620 round_trippers.go:580]     Audit-Id: d261ef43-a74a-414a-a31c-4ae401916093
	I0706 21:00:15.489177    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:15.489177    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:15.980270    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:15.980270    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:15.980270    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:15.980270    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:15.985336    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 21:00:15.985450    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:15.985450    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:15.985450    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:15.985450    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:15.985450    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:15.985598    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:15 GMT
	I0706 21:00:15.985598    8620 round_trippers.go:580]     Audit-Id: eaccf0eb-5c6f-4377-bec4-58e08ab68b00
	I0706 21:00:15.985791    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:16.478820    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:16.478930    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:16.478930    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:16.478930    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:16.482844    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:16.483057    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:16.483131    8620 round_trippers.go:580]     Audit-Id: a5fa08f5-12a6-43d4-a2e5-aa23d631c668
	I0706 21:00:16.483131    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:16.483131    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:16.483131    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:16.483131    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:16.483131    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:16 GMT
	I0706 21:00:16.483131    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:16.979727    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:16.979831    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:16.979831    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:16.979831    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:16.983508    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:16.983508    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:16.983508    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:16.984514    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:16.984514    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:16.984514    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:16 GMT
	I0706 21:00:16.984514    8620 round_trippers.go:580]     Audit-Id: 9dc9b4cb-b9aa-4250-8cbf-65738054ade5
	I0706 21:00:16.984514    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:16.984598    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:17.480118    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:17.480118    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:17.480118    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:17.480118    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:17.484085    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:17.484085    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:17.484564    8620 round_trippers.go:580]     Audit-Id: 0b5ca36a-5997-42fe-a234-d7a3f785636f
	I0706 21:00:17.484564    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:17.484639    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:17.484639    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:17.484639    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:17.484639    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:17 GMT
	I0706 21:00:17.484987    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:17.485675    8620 node_ready.go:58] node "multinode-144300-m03" has status "Ready":"False"
	I0706 21:00:17.980407    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:17.980407    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:17.980543    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:17.980543    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:17.983896    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:17.983896    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:17.983896    8620 round_trippers.go:580]     Audit-Id: 7e6078a5-cb5f-4b30-a16e-6a175fd68269
	I0706 21:00:17.984883    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:17.984883    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:17.984883    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:17.984883    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:17.984883    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:17 GMT
	I0706 21:00:17.985117    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:18.478495    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:18.478495    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:18.478581    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:18.478581    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:18.481954    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:18.482074    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:18.482074    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:18 GMT
	I0706 21:00:18.482156    8620 round_trippers.go:580]     Audit-Id: e1b85f1b-375f-4623-8d59-00826e0a1316
	I0706 21:00:18.482156    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:18.482238    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:18.482238    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:18.482278    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:18.482468    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:18.974697    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:18.974697    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:18.974697    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:18.974697    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:18.979345    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 21:00:18.979345    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:18.979345    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:18 GMT
	I0706 21:00:18.979345    8620 round_trippers.go:580]     Audit-Id: 92ff1f26-5062-45b8-adbf-a46b0b5b5b8e
	I0706 21:00:18.979345    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:18.979345    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:18.979589    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:18.979589    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:18.979843    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:19.479281    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:19.479388    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:19.479388    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:19.479388    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:19.482760    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:19.483283    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:19.483283    8620 round_trippers.go:580]     Audit-Id: 8fa45bc6-d838-48d8-9c15-6bac8b436a17
	I0706 21:00:19.483283    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:19.483283    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:19.483283    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:19.483389    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:19.483389    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:19 GMT
	I0706 21:00:19.483753    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:19.978336    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:19.978336    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:19.978468    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:19.978468    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:19.983489    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 21:00:19.983580    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:19.983580    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:19.983644    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:19.983644    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:19.983644    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:19.983644    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:19 GMT
	I0706 21:00:19.983740    8620 round_trippers.go:580]     Audit-Id: 7b74ec3f-5a22-48ce-b1e2-fa1ddf115d7e
	I0706 21:00:19.983768    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:19.984303    8620 node_ready.go:58] node "multinode-144300-m03" has status "Ready":"False"
	I0706 21:00:20.483301    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:20.483301    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:20.483301    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:20.483301    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:20.486933    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:20.487642    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:20.487642    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:20.487642    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:20 GMT
	I0706 21:00:20.487642    8620 round_trippers.go:580]     Audit-Id: 91d25315-eebf-4e34-8f46-4565a890b7f6
	I0706 21:00:20.487642    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:20.487642    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:20.487798    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:20.488012    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:20.985743    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:20.985872    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:20.985872    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:20.985872    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:20.990558    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 21:00:20.990558    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:20.990679    8620 round_trippers.go:580]     Audit-Id: 69d3cac2-3bb8-4f1c-88a4-fea35d59305c
	I0706 21:00:20.990729    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:20.990729    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:20.990816    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:20.990816    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:20.990816    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:20 GMT
	I0706 21:00:20.991201    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:21.474872    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:21.474872    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:21.474872    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:21.474872    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:21.483960    8620 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0706 21:00:21.483960    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:21.483960    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:21.483960    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:21.483960    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:21.483960    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:21.483960    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:21 GMT
	I0706 21:00:21.483960    8620 round_trippers.go:580]     Audit-Id: 1ab734dc-25fd-4558-b867-1038d3e25bbe
	I0706 21:00:21.484866    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:21.974466    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:21.974542    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:21.974542    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:21.974600    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:21.977429    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 21:00:21.978482    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:21.978482    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:21.978482    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:21.978482    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:21.978482    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:21 GMT
	I0706 21:00:21.978482    8620 round_trippers.go:580]     Audit-Id: 8d21ce57-40c2-47e6-b73a-e8c516c440fc
	I0706 21:00:21.978554    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:21.978700    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:22.473514    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:22.473514    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:22.473514    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:22.473514    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:22.477293    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:22.477293    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:22.477293    8620 round_trippers.go:580]     Audit-Id: bcd4c328-528c-480d-b137-ff37adb2517f
	I0706 21:00:22.477293    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:22.478336    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:22.478336    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:22.478336    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:22.478336    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:22 GMT
	I0706 21:00:22.478562    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:22.478958    8620 node_ready.go:58] node "multinode-144300-m03" has status "Ready":"False"
	I0706 21:00:22.973260    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:22.973260    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:22.973260    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:22.973260    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:22.977187    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:22.977476    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:22.977476    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:22 GMT
	I0706 21:00:22.977476    8620 round_trippers.go:580]     Audit-Id: 7737bf8f-f2ec-4ea8-996f-6af05b23d8f9
	I0706 21:00:22.977558    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:22.977558    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:22.977558    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:22.977558    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:22.977803    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:23.475451    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:23.475451    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:23.475451    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:23.475451    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:23.478990    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 21:00:23.478990    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:23.479048    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:23.479048    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:23 GMT
	I0706 21:00:23.479048    8620 round_trippers.go:580]     Audit-Id: 22a431d0-d3b9-43b4-bb5e-7f9020502ac7
	I0706 21:00:23.479048    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:23.479048    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:23.479048    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:23.479542    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:23.975825    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:23.975825    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:23.975939    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:23.975939    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:23.979305    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:23.979305    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:23.979305    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:23.979305    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:23.979305    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:23.979305    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:23 GMT
	I0706 21:00:23.979905    8620 round_trippers.go:580]     Audit-Id: 8d39de1f-aba1-4919-9953-f8f776026f16
	I0706 21:00:23.979905    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:23.980218    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:24.482779    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:24.482779    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:24.482779    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:24.482779    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:24.486577    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:24.486955    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:24.486955    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:24.486955    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:24.486955    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:24.486955    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:24.486955    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:24 GMT
	I0706 21:00:24.486955    8620 round_trippers.go:580]     Audit-Id: c0423aee-589f-4bc8-a98c-d8ac603e02b6
	I0706 21:00:24.487143    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:24.487143    8620 node_ready.go:58] node "multinode-144300-m03" has status "Ready":"False"
	I0706 21:00:24.971197    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:24.971267    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:24.971267    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:24.971267    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:24.985761    8620 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0706 21:00:24.985761    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:24.985761    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:24.985918    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:24.985918    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:24.985918    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:24 GMT
	I0706 21:00:24.985918    8620 round_trippers.go:580]     Audit-Id: 567907f3-8cbe-460b-8fbf-112a39ce78df
	I0706 21:00:24.985918    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:24.986661    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:25.473071    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:25.473161    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:25.473161    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:25.473161    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:25.477556    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 21:00:25.478387    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:25.478387    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:25.478387    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:25.478578    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:25.478578    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:25 GMT
	I0706 21:00:25.478686    8620 round_trippers.go:580]     Audit-Id: a677faf3-b95e-45d2-ba64-56f3b99cb1bb
	I0706 21:00:25.478686    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:25.478994    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:25.974080    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:25.974080    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:25.974080    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:25.974080    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:25.981166    8620 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0706 21:00:25.981166    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:25.981242    8620 round_trippers.go:580]     Audit-Id: 1ad70d7d-9275-464b-9c0d-2866d3bcbfbd
	I0706 21:00:25.981242    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:25.981242    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:25.981242    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:25.981242    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:25.981302    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:25 GMT
	I0706 21:00:25.981494    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:26.473424    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:26.473532    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:26.473532    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:26.473532    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:26.478230    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 21:00:26.478306    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:26.478306    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:26.478306    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:26.478306    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:26.478306    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:26.478306    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:26 GMT
	I0706 21:00:26.478306    8620 round_trippers.go:580]     Audit-Id: 0f84f19d-2ac4-4626-a0ce-5830dfb9a4c2
	I0706 21:00:26.478475    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:26.976566    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:26.976725    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:26.976725    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:26.976725    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:26.980388    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:26.980798    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:26.980798    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:26.980798    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:26 GMT
	I0706 21:00:26.980798    8620 round_trippers.go:580]     Audit-Id: f742c6cc-0335-46d7-94b3-a2f485137628
	I0706 21:00:26.980945    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:26.980945    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:26.980945    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:26.981169    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:26.981900    8620 node_ready.go:58] node "multinode-144300-m03" has status "Ready":"False"
	I0706 21:00:27.480064    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:27.480064    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:27.480064    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:27.480064    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:27.483651    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:27.483651    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:27.483651    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:27.484666    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:27 GMT
	I0706 21:00:27.484666    8620 round_trippers.go:580]     Audit-Id: aec1e102-cb74-4512-a7a1-4218445b6757
	I0706 21:00:27.484666    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:27.484666    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:27.484717    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:27.484810    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1478","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 3135 chars]
	I0706 21:00:27.979371    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:27.979428    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:27.979428    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:27.979428    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:27.985077    8620 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0706 21:00:27.985585    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:27.985585    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:27.985585    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:27.985585    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:27.985585    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:27.985678    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:27 GMT
	I0706 21:00:27.985678    8620 round_trippers.go:580]     Audit-Id: b44bbd12-7bf4-4940-a246-951757a98dcb
	I0706 21:00:27.985880    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1509","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 3204 chars]
	I0706 21:00:27.986065    8620 node_ready.go:49] node "multinode-144300-m03" has status "Ready":"True"
	I0706 21:00:27.986065    8620 node_ready.go:38] duration metric: took 17.0216396s waiting for node "multinode-144300-m03" to be "Ready" ...
	I0706 21:00:27.986065    8620 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 21:00:27.986065    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods
	I0706 21:00:27.986065    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:27.986065    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:27.986065    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:27.994663    8620 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0706 21:00:27.994663    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:27.994663    8620 round_trippers.go:580]     Audit-Id: a6a7dbfb-b800-426f-890a-ecc6e1558266
	I0706 21:00:27.994663    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:27.994663    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:27.995532    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:27.995582    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:27.995582    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:27 GMT
	I0706 21:00:27.996419    8620 request.go:1188] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1509"},"items":[{"metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1242","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 82842 chars]
	I0706 21:00:28.001311    8620 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:28.001515    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/coredns-5d78c9869d-m7j99
	I0706 21:00:28.001515    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:28.001591    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:28.001614    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:28.004169    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 21:00:28.004169    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:28.004169    8620 round_trippers.go:580]     Audit-Id: 63256a0e-5d17-4912-87e6-ed96c8da21d3
	I0706 21:00:28.004169    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:28.004169    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:28.004169    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:28.004169    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:28.004169    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:28 GMT
	I0706 21:00:28.004169    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5d78c9869d-m7j99","generateName":"coredns-5d78c9869d-","namespace":"kube-system","uid":"dfa019d5-9528-4f25-8aab-03d1d276bb0c","resourceVersion":"1242","creationTimestamp":"2023-07-06T20:46:48Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5d78c9869d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5d78c9869d","uid":"ad297177-8eb2-413c-a2ee-6b7462392400","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ad297177-8eb2-413c-a2ee-6b7462392400\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6491 chars]
	I0706 21:00:28.004857    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 21:00:28.004857    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:28.004857    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:28.004857    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:28.007470    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 21:00:28.007470    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:28.007470    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:28.007470    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:28.007470    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:28.007470    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:28.007470    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:28 GMT
	I0706 21:00:28.007470    8620 round_trippers.go:580]     Audit-Id: 0fbf56ed-8d6b-4e6b-a4c4-db1280d13fc7
	I0706 21:00:28.007470    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 21:00:28.008443    8620 pod_ready.go:92] pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace has status "Ready":"True"
	I0706 21:00:28.008443    8620 pod_ready.go:81] duration metric: took 7.0748ms waiting for pod "coredns-5d78c9869d-m7j99" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:28.008637    8620 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:28.008715    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-144300
	I0706 21:00:28.008715    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:28.008715    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:28.008715    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:28.012543    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:28.012543    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:28.012543    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:28.012658    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:28.012658    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:28.012658    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:28 GMT
	I0706 21:00:28.012658    8620 round_trippers.go:580]     Audit-Id: 1cb38f71-bad6-48d1-bcc6-9ad53c8672b4
	I0706 21:00:28.012658    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:28.012823    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-144300","namespace":"kube-system","uid":"3cf71374-8b9f-4bee-a5a7-538dcf09ed5e","resourceVersion":"1211","creationTimestamp":"2023-07-06T20:57:34Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.78.0:2379","kubernetes.io/config.hash":"fcb31617214a528ab159c24a1103b7af","kubernetes.io/config.mirror":"fcb31617214a528ab159c24a1103b7af","kubernetes.io/config.seen":"2023-07-06T20:57:27.010845433Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5843 chars]
	I0706 21:00:28.013063    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 21:00:28.013063    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:28.013063    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:28.013063    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:28.015644    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 21:00:28.015644    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:28.015644    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:28 GMT
	I0706 21:00:28.016459    8620 round_trippers.go:580]     Audit-Id: 44f7bf35-dbda-4b6c-848a-973824148e86
	I0706 21:00:28.016459    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:28.016459    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:28.016459    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:28.016459    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:28.016818    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 21:00:28.017375    8620 pod_ready.go:92] pod "etcd-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:00:28.017375    8620 pod_ready.go:81] duration metric: took 8.7376ms waiting for pod "etcd-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:28.017375    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:28.017375    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-144300
	I0706 21:00:28.017375    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:28.017375    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:28.017375    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:28.022153    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 21:00:28.022153    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:28.022234    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:28 GMT
	I0706 21:00:28.022234    8620 round_trippers.go:580]     Audit-Id: 886ee9ea-6ad1-4df8-b36a-c8fd0a398023
	I0706 21:00:28.022234    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:28.022271    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:28.022271    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:28.022294    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:28.022326    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-144300","namespace":"kube-system","uid":"c3e05753-1404-4779-b0dd-d7bf63b44bdd","resourceVersion":"1205","creationTimestamp":"2023-07-06T20:57:34Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.78.0:8443","kubernetes.io/config.hash":"39dd3fec037ddc9365ef4418fd161ea0","kubernetes.io/config.mirror":"39dd3fec037ddc9365ef4418fd161ea0","kubernetes.io/config.seen":"2023-07-06T20:57:27.010850733Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:57:34Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7382 chars]
	I0706 21:00:28.023056    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 21:00:28.023084    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:28.023084    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:28.023084    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:28.026540    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:28.026540    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:28.026540    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:28 GMT
	I0706 21:00:28.026540    8620 round_trippers.go:580]     Audit-Id: 44858804-3e3e-46c9-8022-823a0d5351d7
	I0706 21:00:28.026540    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:28.026540    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:28.026540    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:28.026540    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:28.026540    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 21:00:28.027218    8620 pod_ready.go:92] pod "kube-apiserver-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:00:28.027239    8620 pod_ready.go:81] duration metric: took 9.8638ms waiting for pod "kube-apiserver-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:28.027239    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:28.027304    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-144300
	I0706 21:00:28.027304    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:28.027304    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:28.027304    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:28.030012    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 21:00:28.030012    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:28.030012    8620 round_trippers.go:580]     Audit-Id: 247e72d9-edb0-4a42-8584-60d633a3ab21
	I0706 21:00:28.030012    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:28.030012    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:28.030012    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:28.030795    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:28.030795    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:28 GMT
	I0706 21:00:28.031064    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-144300","namespace":"kube-system","uid":"d9a60269-68e9-4ea2-82fe-63cedee225ef","resourceVersion":"1214","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e49c2c2437113c0995d945e240b5b14","kubernetes.io/config.mirror":"8e49c2c2437113c0995d945e240b5b14","kubernetes.io/config.seen":"2023-07-06T20:46:36.035686687Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7167 chars]
	I0706 21:00:28.031492    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 21:00:28.031492    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:28.031492    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:28.031492    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:28.033934    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 21:00:28.034264    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:28.034264    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:28.034264    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:28.034329    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:28.034329    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:28 GMT
	I0706 21:00:28.034329    8620 round_trippers.go:580]     Audit-Id: 5945dea5-d5f6-4cfd-a792-a019f6fc798d
	I0706 21:00:28.034329    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:28.034329    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 21:00:28.034911    8620 pod_ready.go:92] pod "kube-controller-manager-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:00:28.034911    8620 pod_ready.go:81] duration metric: took 7.672ms waiting for pod "kube-controller-manager-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:28.034911    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5vmt" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:28.180623    8620 request.go:628] Waited for 145.7108ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f5vmt
	I0706 21:00:28.180890    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-f5vmt
	I0706 21:00:28.181046    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:28.181046    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:28.181046    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:28.183748    8620 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0706 21:00:28.183748    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:28.183748    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:28.183748    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:28.183748    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:28.184720    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:28.184720    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:28 GMT
	I0706 21:00:28.184720    8620 round_trippers.go:580]     Audit-Id: 082d1845-7ff7-4df2-b430-405872712d00
	I0706 21:00:28.185002    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-f5vmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"e615de7b-b4a0-4060-aecd-0581b032227d","resourceVersion":"1361","creationTimestamp":"2023-07-06T20:48:24Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:48:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5743 chars]
	I0706 21:00:28.381278    8620 request.go:628] Waited for 195.1884ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 21:00:28.381378    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m02
	I0706 21:00:28.381378    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:28.381378    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:28.381378    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:28.385094    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:28.385709    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:28.385709    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:28 GMT
	I0706 21:00:28.385709    8620 round_trippers.go:580]     Audit-Id: 2b574ac9-efca-45ca-a958-d52dc157cb82
	I0706 21:00:28.385709    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:28.385709    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:28.385709    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:28.385709    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:28.385888    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m02","uid":"fdd2ff3e-734f-4674-9b0c-c2ab273616c3","resourceVersion":"1387","creationTimestamp":"2023-07-06T20:58:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:58:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 3252 chars]
	I0706 21:00:28.386219    8620 pod_ready.go:92] pod "kube-proxy-f5vmt" in "kube-system" namespace has status "Ready":"True"
	I0706 21:00:28.386219    8620 pod_ready.go:81] duration metric: took 351.3061ms waiting for pod "kube-proxy-f5vmt" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:28.386219    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h6h62" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:28.584933    8620 request.go:628] Waited for 198.3516ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6h62
	I0706 21:00:28.585060    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-h6h62
	I0706 21:00:28.585060    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:28.585060    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:28.585060    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:28.588330    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:28.589044    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:28.589044    8620 round_trippers.go:580]     Audit-Id: 9378ae5c-fa39-4014-be60-b380571501ef
	I0706 21:00:28.589044    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:28.589044    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:28.589044    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:28.589144    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:28.589144    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:28 GMT
	I0706 21:00:28.589393    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-h6h62","generateName":"kube-proxy-","namespace":"kube-system","uid":"6949ff1e-f5c0-4ab2-ae7f-6b30775e220d","resourceVersion":"1170","creationTimestamp":"2023-07-06T20:46:49Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0706 21:00:28.788228    8620 request.go:628] Waited for 197.571ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 21:00:28.788482    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 21:00:28.788516    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:28.788545    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:28.788545    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:28.792148    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:28.792148    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:28.792148    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:28.792658    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:28.792658    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:28 GMT
	I0706 21:00:28.792658    8620 round_trippers.go:580]     Audit-Id: 9b60c1ff-b6b2-4c06-a292-58ed65be82c4
	I0706 21:00:28.792658    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:28.792658    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:28.793050    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 21:00:28.793615    8620 pod_ready.go:92] pod "kube-proxy-h6h62" in "kube-system" namespace has status "Ready":"True"
	I0706 21:00:28.793691    8620 pod_ready.go:81] duration metric: took 407.4683ms waiting for pod "kube-proxy-h6h62" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:28.793691    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-x7bwf" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:28.992144    8620 request.go:628] Waited for 198.2854ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x7bwf
	I0706 21:00:28.992239    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x7bwf
	I0706 21:00:28.992239    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:28.992364    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:28.992364    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:28.996769    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 21:00:28.996813    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:28.996813    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:28.996813    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:28 GMT
	I0706 21:00:28.996813    8620 round_trippers.go:580]     Audit-Id: 0fa0230d-5084-4a9f-be56-c121e6855393
	I0706 21:00:28.996813    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:28.996813    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:28.996813    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:28.996813    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-x7bwf","generateName":"kube-proxy-","namespace":"kube-system","uid":"3326b20f-277b-435c-8b7e-7d305167affb","resourceVersion":"1482","creationTimestamp":"2023-07-06T20:50:55Z","labels":{"controller-revision-hash":"56999f657b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"65be92af-a7ef-42bf-a2bd-37771c65f996","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:50:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"65be92af-a7ef-42bf-a2bd-37771c65f996\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5747 chars]
	I0706 21:00:29.181917    8620 request.go:628] Waited for 184.0832ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:29.182109    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300-m03
	I0706 21:00:29.182109    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:29.182109    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:29.182109    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:29.186000    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:29.186082    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:29.186082    8620 round_trippers.go:580]     Audit-Id: 87cdf65a-6285-4581-8af0-a407d638efb4
	I0706 21:00:29.186082    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:29.186082    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:29.186243    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:29.186243    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:29.186243    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:29 GMT
	I0706 21:00:29.186448    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300-m03","uid":"e62a7065-a874-4603-a0b5-631499da4630","resourceVersion":"1509","creationTimestamp":"2023-07-06T21:00:08Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T21:00:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":
{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","tim [truncated 3204 chars]
	I0706 21:00:29.186939    8620 pod_ready.go:92] pod "kube-proxy-x7bwf" in "kube-system" namespace has status "Ready":"True"
	I0706 21:00:29.187009    8620 pod_ready.go:81] duration metric: took 393.316ms waiting for pod "kube-proxy-x7bwf" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:29.187009    8620 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:29.385368    8620 request.go:628] Waited for 198.286ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-144300
	I0706 21:00:29.385904    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-144300
	I0706 21:00:29.385959    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:29.385959    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:29.385959    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:29.393285    8620 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0706 21:00:29.393846    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:29.393934    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:29.393934    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:29.393934    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:29.393934    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:29.393934    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:29 GMT
	I0706 21:00:29.393934    8620 round_trippers.go:580]     Audit-Id: 947e081a-8258-45df-bb8b-c56bff92d014
	I0706 21:00:29.393934    8620 request.go:1188] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-144300","namespace":"kube-system","uid":"70e904dd-fca0-436e-84d9-101fbc1cd9b0","resourceVersion":"1227","creationTimestamp":"2023-07-06T20:46:36Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee7970f809cc50237f5ebbefc1799bf2","kubernetes.io/config.mirror":"ee7970f809cc50237f5ebbefc1799bf2","kubernetes.io/config.seen":"2023-07-06T20:46:36.035687887Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-07-06T20:46:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4897 chars]
	I0706 21:00:29.590729    8620 request.go:628] Waited for 195.7842ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 21:00:29.590729    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes/multinode-144300
	I0706 21:00:29.590729    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:29.590729    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:29.590729    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:29.594916    8620 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0706 21:00:29.594916    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:29.595116    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:29 GMT
	I0706 21:00:29.595116    8620 round_trippers.go:580]     Audit-Id: ebdf1d55-b4fb-479a-a076-43f10c910be8
	I0706 21:00:29.595116    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:29.595116    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:29.595200    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:29.595200    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:29.595467    8620 request.go:1188] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2023-07-06T20:46:32Z","fieldsType":"FieldsV1","f [truncated 5235 chars]
	I0706 21:00:29.595760    8620 pod_ready.go:92] pod "kube-scheduler-multinode-144300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:00:29.595760    8620 pod_ready.go:81] duration metric: took 408.7473ms waiting for pod "kube-scheduler-multinode-144300" in "kube-system" namespace to be "Ready" ...
	I0706 21:00:29.595760    8620 pod_ready.go:38] duration metric: took 1.6096826s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 21:00:29.595760    8620 system_svc.go:44] waiting for kubelet service to be running ....
	I0706 21:00:29.605439    8620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 21:00:29.625655    8620 system_svc.go:56] duration metric: took 29.895ms WaitForService to wait for kubelet.
	I0706 21:00:29.625655    8620 kubeadm.go:581] duration metric: took 18.7024025s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0706 21:00:29.625655    8620 node_conditions.go:102] verifying NodePressure condition ...
	I0706 21:00:29.792717    8620 request.go:628] Waited for 166.7109ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.78.0:8443/api/v1/nodes
	I0706 21:00:29.792969    8620 round_trippers.go:463] GET https://172.29.78.0:8443/api/v1/nodes
	I0706 21:00:29.792969    8620 round_trippers.go:469] Request Headers:
	I0706 21:00:29.792969    8620 round_trippers.go:473]     Accept: application/json, */*
	I0706 21:00:29.793201    8620 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0706 21:00:29.796521    8620 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0706 21:00:29.796521    8620 round_trippers.go:577] Response Headers:
	I0706 21:00:29.796521    8620 round_trippers.go:580]     Content-Type: application/json
	I0706 21:00:29.796617    8620 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: ac696ab1-a684-4293-9232-a8520ff04c4c
	I0706 21:00:29.796617    8620 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f3c15604-5218-4e03-beea-e6b0b8b3b03a
	I0706 21:00:29.796617    8620 round_trippers.go:580]     Date: Thu, 06 Jul 2023 21:00:29 GMT
	I0706 21:00:29.796617    8620 round_trippers.go:580]     Audit-Id: 56edf5df-dede-4929-af6f-6ecb8f54a656
	I0706 21:00:29.796617    8620 round_trippers.go:580]     Cache-Control: no-cache, private
	I0706 21:00:29.797075    8620 request.go:1188] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1512"},"items":[{"metadata":{"name":"multinode-144300","uid":"86009b5d-42ab-4828-bf6a-a70082397583","resourceVersion":"1209","creationTimestamp":"2023-07-06T20:46:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-144300","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4d384f293eb4d1ae13e8a16440afa4ec48ef3148","minikube.k8s.io/name":"multinode-144300","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_07_06T20_46_37_0700","minikube.k8s.io/version":"v1.30.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 13729 chars]
	I0706 21:00:29.797934    8620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 21:00:29.797934    8620 node_conditions.go:123] node cpu capacity is 2
	I0706 21:00:29.798020    8620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 21:00:29.798020    8620 node_conditions.go:123] node cpu capacity is 2
	I0706 21:00:29.798020    8620 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 21:00:29.798020    8620 node_conditions.go:123] node cpu capacity is 2
	I0706 21:00:29.798020    8620 node_conditions.go:105] duration metric: took 172.3642ms to run NodePressure ...
	I0706 21:00:29.798092    8620 start.go:228] waiting for startup goroutines ...
	I0706 21:00:29.798092    8620 start.go:242] writing updated cluster config ...
	I0706 21:00:29.807499    8620 ssh_runner.go:195] Run: rm -f paused
	I0706 21:00:29.983580    8620 start.go:642] kubectl: 1.18.2, cluster: 1.27.3 (minor skew: 9)
	I0706 21:00:29.986463    8620 out.go:177] 
	W0706 21:00:29.988622    8620 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.27.3.
	I0706 21:00:29.991442    8620 out.go:177]   - Want kubectl v1.27.3? Try 'minikube kubectl -- get pods -A'
	I0706 21:00:29.995206    8620 out.go:177] * Done! kubectl is now configured to use "multinode-144300" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-07-06 20:56:48 UTC, ends at Thu 2023-07-06 21:00:38 UTC. --
	Jul 06 20:57:50 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:50.233191963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:57:50 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:50.233750469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 20:57:50 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:50.234062972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:57:50 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:50.359421009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 20:57:50 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:50.359668511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:57:50 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:50.359841213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 20:57:50 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:50.359927414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:57:50 multinode-144300 cri-dockerd[1235]: time="2023-07-06T20:57:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dd4cc280f084d7dbf77d6945271b9f48a0a7ee7537dab3cea6e263731ef4f7b0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 06 20:57:50 multinode-144300 cri-dockerd[1235]: time="2023-07-06T20:57:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f3b87c765a78be119e2c87751049b5b296738e67d02172cf40f1a7450d635dc6/resolv.conf as [nameserver 172.29.64.1]"
	Jul 06 20:57:51 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:51.020924148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 20:57:51 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:51.021969159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:57:51 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:51.025552194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 20:57:51 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:51.025635295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:57:51 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:51.141380152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 20:57:51 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:51.144586284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:57:51 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:51.144875187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 20:57:51 multinode-144300 dockerd[1022]: time="2023-07-06T20:57:51.145409392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:58:06 multinode-144300 dockerd[1016]: time="2023-07-06T20:58:06.533213360Z" level=info msg="ignoring event" container=4ef833c701642d2d0c9ab2f26d914fa6ff8cc864c0b2224b69bf2522df95b5e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 06 20:58:06 multinode-144300 dockerd[1022]: time="2023-07-06T20:58:06.533406661Z" level=info msg="shim disconnected" id=4ef833c701642d2d0c9ab2f26d914fa6ff8cc864c0b2224b69bf2522df95b5e8 namespace=moby
	Jul 06 20:58:06 multinode-144300 dockerd[1022]: time="2023-07-06T20:58:06.533905063Z" level=warning msg="cleaning up after shim disconnected" id=4ef833c701642d2d0c9ab2f26d914fa6ff8cc864c0b2224b69bf2522df95b5e8 namespace=moby
	Jul 06 20:58:06 multinode-144300 dockerd[1022]: time="2023-07-06T20:58:06.533916563Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 06 20:58:20 multinode-144300 dockerd[1022]: time="2023-07-06T20:58:20.241696291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 20:58:20 multinode-144300 dockerd[1022]: time="2023-07-06T20:58:20.241844393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 20:58:20 multinode-144300 dockerd[1022]: time="2023-07-06T20:58:20.241866193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 20:58:20 multinode-144300 dockerd[1022]: time="2023-07-06T20:58:20.241981295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	e0d29fca70d31       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   19dadd4a04248
	5b5472d79192c       ead0a4a53df89                                                                                         2 minutes ago       Running             coredns                   1                   f3b87c765a78b
	07a25b01fb622       8c811b4aec35f                                                                                         2 minutes ago       Running             busybox                   1                   dd4cc280f084d
	8e1a757ea338a       b0b1fa0f58c6e                                                                                         3 minutes ago       Running             kindnet-cni               1                   ed3126899bdc2
	4ef833c701642       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   19dadd4a04248
	ea971da32b0fd       5780543258cf0                                                                                         3 minutes ago       Running             kube-proxy                1                   1279048ada6ee
	981be2e468b52       86b6af7dd652c                                                                                         3 minutes ago       Running             etcd                      0                   60690f6f24763
	764db36598984       41697ceeb70b3                                                                                         3 minutes ago       Running             kube-scheduler            1                   01e075b2890f8
	877799b6b0593       7cffc01dba0e1                                                                                         3 minutes ago       Running             kube-controller-manager   1                   8972639b0525c
	4af3781c42208       08a0c939e61b7                                                                                         3 minutes ago       Running             kube-apiserver            0                   041e234e303f6
	0ec910823d675       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Exited              busybox                   0                   273f0a120cd48
	d9e48f8643f47       ead0a4a53df89                                                                                         13 minutes ago      Exited              coredns                   0                   791a2e3d6abe6
	2ec34877e4acd       kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974              13 minutes ago      Exited              kindnet-cni               0                   c1aec25071ed9
	b92d8760a51ff       5780543258cf0                                                                                         13 minutes ago      Exited              kube-proxy                0                   eec796df46dbf
	775dc0b6d0dcc       41697ceeb70b3                                                                                         14 minutes ago      Exited              kube-scheduler            0                   04380a3faf912
	9deab8b718f35       7cffc01dba0e1                                                                                         14 minutes ago      Exited              kube-controller-manager   0                   f4d2e1b10e79b
	
	* 
	* ==> coredns [5b5472d79192] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 1008319858cd8849366c0f5555156ab5ce20cd98fedc211c6675234f8e435bfd28cd4ed3ec9afaafbad6dd8b85ab8681d4da6cc55eede0ec805bf7bd7719a5c3
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36516 - 27105 "HINFO IN 883103658202076329.9030612248448120684. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.054394744s
	
	* 
	* ==> coredns [d9e48f8643f4] <==
	* [INFO] 10.244.1.2:55798 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000664s
	[INFO] 10.244.1.2:38081 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000875s
	[INFO] 10.244.1.2:36525 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.0000699s
	[INFO] 10.244.1.2:44463 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000528s
	[INFO] 10.244.1.2:51138 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000591s
	[INFO] 10.244.1.2:54618 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081101s
	[INFO] 10.244.1.2:55676 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000454s
	[INFO] 10.244.0.3:38721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163001s
	[INFO] 10.244.0.3:42041 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108801s
	[INFO] 10.244.0.3:45947 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000160001s
	[INFO] 10.244.0.3:58157 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084901s
	[INFO] 10.244.1.2:34962 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129801s
	[INFO] 10.244.1.2:53801 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000182602s
	[INFO] 10.244.1.2:52790 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000122801s
	[INFO] 10.244.1.2:57732 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092401s
	[INFO] 10.244.0.3:36006 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116401s
	[INFO] 10.244.0.3:44100 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000105901s
	[INFO] 10.244.0.3:50791 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091301s
	[INFO] 10.244.0.3:49929 - 5 "PTR IN 1.64.29.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000064601s
	[INFO] 10.244.1.2:38982 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000951s
	[INFO] 10.244.1.2:50028 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000162502s
	[INFO] 10.244.1.2:38044 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.0000673s
	[INFO] 10.244.1.2:35547 - 5 "PTR IN 1.64.29.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000090801s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-144300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-144300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d384f293eb4d1ae13e8a16440afa4ec48ef3148
	                    minikube.k8s.io/name=multinode-144300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_06T20_46_37_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 06 Jul 2023 20:46:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-144300
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 06 Jul 2023 21:00:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 06 Jul 2023 20:57:42 +0000   Thu, 06 Jul 2023 20:46:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 06 Jul 2023 20:57:42 +0000   Thu, 06 Jul 2023 20:46:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 06 Jul 2023 20:57:42 +0000   Thu, 06 Jul 2023 20:46:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 06 Jul 2023 20:57:42 +0000   Thu, 06 Jul 2023 20:57:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.78.0
	  Hostname:    multinode-144300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 0891aed6ce65404a9b34cbce2da0c8a3
	  System UUID:                f2b24827-fd9a-be40-b7bb-ed0eca8a4e3a
	  Boot ID:                    2376b5e8-6764-44fa-b66a-2e18e26a31fe
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-47tnt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-5d78c9869d-m7j99                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-144300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m4s
	  kube-system                 kindnet-9pjnm                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-144300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 kube-controller-manager-multinode-144300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-h6h62                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-144300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m1s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-144300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-144300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-144300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-144300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-144300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-144300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-144300 event: Registered Node multinode-144300 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-144300 status is now: NodeReady
	  Normal  Starting                 3m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m11s (x8 over 3m11s)  kubelet          Node multinode-144300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m11s (x8 over 3m11s)  kubelet          Node multinode-144300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m11s (x7 over 3m11s)  kubelet          Node multinode-144300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m52s                  node-controller  Node multinode-144300 event: Registered Node multinode-144300 in Controller
	
	
	Name:               multinode-144300-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-144300-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 06 Jul 2023 20:58:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-144300-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 06 Jul 2023 21:00:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 06 Jul 2023 20:59:07 +0000   Thu, 06 Jul 2023 20:58:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 06 Jul 2023 20:59:07 +0000   Thu, 06 Jul 2023 20:58:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 06 Jul 2023 20:59:07 +0000   Thu, 06 Jul 2023 20:58:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 06 Jul 2023 20:59:07 +0000   Thu, 06 Jul 2023 20:59:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.74.65
	  Hostname:    multinode-144300-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 5d1659ef74bf44dabeae34d9ad27fa70
	  System UUID:                d86403da-f9b6-a346-9afe-e8d51877b934
	  Boot ID:                    83dd570d-61c5-4500-8be5-8628259359a9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-67b7f59bb-bfmd7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kindnet-z6sjf              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-f5vmt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  Starting                 99s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    12m (x2 over 12m)    kubelet          Node multinode-144300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x2 over 12m)    kubelet          Node multinode-144300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  12m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m (x2 over 12m)    kubelet          Node multinode-144300-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 12m                  kubelet          Starting kubelet.
	  Normal  NodeReady                11m                  kubelet          Node multinode-144300-m02 status is now: NodeReady
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x2 over 102s)  kubelet          Node multinode-144300-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x2 over 102s)  kubelet          Node multinode-144300-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x2 over 102s)  kubelet          Node multinode-144300-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  102s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           97s                  node-controller  Node multinode-144300-m02 event: Registered Node multinode-144300-m02 in Controller
	  Normal  NodeReady                91s                  kubelet          Node multinode-144300-m02 status is now: NodeReady
	
	
	Name:               multinode-144300-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-144300-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 06 Jul 2023 21:00:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-144300-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 06 Jul 2023 21:00:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 06 Jul 2023 21:00:27 +0000   Thu, 06 Jul 2023 21:00:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 06 Jul 2023 21:00:27 +0000   Thu, 06 Jul 2023 21:00:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 06 Jul 2023 21:00:27 +0000   Thu, 06 Jul 2023 21:00:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 06 Jul 2023 21:00:27 +0000   Thu, 06 Jul 2023 21:00:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.78.173
	  Hostname:    multinode-144300-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165980Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a180672574845b88cd731065d2430c7
	  System UUID:                bfa92a6c-caa3-f04c-aa71-aa2dc52ba2f0
	  Boot ID:                    770f40cc-bad4-440f-9d6d-4429a1b2b50b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jhjpn       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m43s
	  kube-system                 kube-proxy-x7bwf    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 9m33s                  kube-proxy  
	  Normal  Starting                 26s                    kube-proxy  
	  Normal  Starting                 5m25s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  9m43s (x5 over 9m45s)  kubelet     Node multinode-144300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m43s (x5 over 9m45s)  kubelet     Node multinode-144300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m43s (x5 over 9m45s)  kubelet     Node multinode-144300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m25s                  kubelet     Node multinode-144300-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m28s (x2 over 5m28s)  kubelet     Node multinode-144300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m28s (x2 over 5m28s)  kubelet     Node multinode-144300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m28s (x2 over 5m28s)  kubelet     Node multinode-144300-m03 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m28s                  kubelet     Starting kubelet.
	  Normal  NodeReady                5m18s                  kubelet     Node multinode-144300-m03 status is now: NodeReady
	  Normal  Starting                 30s                    kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s (x2 over 30s)      kubelet     Node multinode-144300-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x2 over 30s)      kubelet     Node multinode-144300-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x2 over 30s)      kubelet     Node multinode-144300-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                11s                    kubelet     Node multinode-144300-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	*                 "trace_clock=local"
	              on the kernel command line
	[  +0.000023] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.983312] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.139831] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.097832] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.156443] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000025] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jul 6 20:57] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.125922] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +8.321557] systemd-fstab-generator[944]: Ignoring "noauto" for root device
	[  +0.480218] systemd-fstab-generator[983]: Ignoring "noauto" for root device
	[  +0.134695] systemd-fstab-generator[994]: Ignoring "noauto" for root device
	[  +0.169521] systemd-fstab-generator[1007]: Ignoring "noauto" for root device
	[  +1.434868] kauditd_printk_skb: 28 callbacks suppressed
	[  +0.346624] systemd-fstab-generator[1180]: Ignoring "noauto" for root device
	[  +0.147002] systemd-fstab-generator[1191]: Ignoring "noauto" for root device
	[  +0.124138] systemd-fstab-generator[1202]: Ignoring "noauto" for root device
	[  +0.147768] systemd-fstab-generator[1213]: Ignoring "noauto" for root device
	[  +0.159482] systemd-fstab-generator[1227]: Ignoring "noauto" for root device
	[  +3.915474] systemd-fstab-generator[1444]: Ignoring "noauto" for root device
	[  +0.775747] kauditd_printk_skb: 29 callbacks suppressed
	[ +19.340316] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [981be2e468b5] <==
	* {"level":"info","ts":"2023-07-06T20:57:30.370Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-06T20:57:30.370Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-06T20:57:30.371Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20534944f3f72b4 switched to configuration voters=(145580374548771508)"}
	{"level":"info","ts":"2023-07-06T20:57:30.371Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ae3d7b74d33a3bd5","local-member-id":"20534944f3f72b4","added-peer-id":"20534944f3f72b4","added-peer-peer-urls":["https://172.29.70.202:2380"]}
	{"level":"info","ts":"2023-07-06T20:57:30.371Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ae3d7b74d33a3bd5","local-member-id":"20534944f3f72b4","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T20:57:30.371Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T20:57:30.374Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-06T20:57:30.375Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"20534944f3f72b4","initial-advertise-peer-urls":["https://172.29.78.0:2380"],"listen-peer-urls":["https://172.29.78.0:2380"],"advertise-client-urls":["https://172.29.78.0:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.78.0:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-06T20:57:30.376Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-06T20:57:30.399Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"172.29.78.0:2380"}
	{"level":"info","ts":"2023-07-06T20:57:30.399Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"172.29.78.0:2380"}
	{"level":"info","ts":"2023-07-06T20:57:31.625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20534944f3f72b4 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-06T20:57:31.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20534944f3f72b4 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-06T20:57:31.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20534944f3f72b4 received MsgPreVoteResp from 20534944f3f72b4 at term 2"}
	{"level":"info","ts":"2023-07-06T20:57:31.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20534944f3f72b4 became candidate at term 3"}
	{"level":"info","ts":"2023-07-06T20:57:31.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20534944f3f72b4 received MsgVoteResp from 20534944f3f72b4 at term 3"}
	{"level":"info","ts":"2023-07-06T20:57:31.626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20534944f3f72b4 became leader at term 3"}
	{"level":"info","ts":"2023-07-06T20:57:31.627Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 20534944f3f72b4 elected leader 20534944f3f72b4 at term 3"}
	{"level":"info","ts":"2023-07-06T20:57:31.631Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"20534944f3f72b4","local-member-attributes":"{Name:multinode-144300 ClientURLs:[https://172.29.78.0:2379]}","request-path":"/0/members/20534944f3f72b4/attributes","cluster-id":"ae3d7b74d33a3bd5","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-06T20:57:31.631Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-06T20:57:31.632Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-06T20:57:31.634Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"172.29.78.0:2379"}
	{"level":"info","ts":"2023-07-06T20:57:31.635Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-06T20:57:31.636Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-06T20:57:31.637Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  21:00:38 up 3 min,  0 users,  load average: 0.07, 0.19, 0.09
	Linux multinode-144300 5.10.57 #1 SMP Fri Jun 30 21:41:53 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [2ec34877e4ac] <==
	* I0706 20:55:08.574853       1 main.go:223] Handling node with IPs: map[172.29.70.202:{}]
	I0706 20:55:08.574892       1 main.go:227] handling current node
	I0706 20:55:08.574903       1 main.go:223] Handling node with IPs: map[172.29.79.241:{}]
	I0706 20:55:08.574909       1 main.go:250] Node multinode-144300-m02 has CIDR [10.244.1.0/24] 
	I0706 20:55:08.575180       1 main.go:223] Handling node with IPs: map[172.29.66.203:{}]
	I0706 20:55:08.575260       1 main.go:250] Node multinode-144300-m03 has CIDR [10.244.2.0/24] 
	I0706 20:55:18.586891       1 main.go:223] Handling node with IPs: map[172.29.70.202:{}]
	I0706 20:55:18.586989       1 main.go:227] handling current node
	I0706 20:55:18.587003       1 main.go:223] Handling node with IPs: map[172.29.79.241:{}]
	I0706 20:55:18.587010       1 main.go:250] Node multinode-144300-m02 has CIDR [10.244.1.0/24] 
	I0706 20:55:18.587518       1 main.go:223] Handling node with IPs: map[172.29.66.123:{}]
	I0706 20:55:18.587663       1 main.go:250] Node multinode-144300-m03 has CIDR [10.244.3.0/24] 
	I0706 20:55:18.588056       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.29.66.123 Flags: [] Table: 0} 
	I0706 20:55:28.599696       1 main.go:223] Handling node with IPs: map[172.29.70.202:{}]
	I0706 20:55:28.599836       1 main.go:227] handling current node
	I0706 20:55:28.599849       1 main.go:223] Handling node with IPs: map[172.29.79.241:{}]
	I0706 20:55:28.599857       1 main.go:250] Node multinode-144300-m02 has CIDR [10.244.1.0/24] 
	I0706 20:55:28.599973       1 main.go:223] Handling node with IPs: map[172.29.66.123:{}]
	I0706 20:55:28.600065       1 main.go:250] Node multinode-144300-m03 has CIDR [10.244.3.0/24] 
	I0706 20:55:38.605602       1 main.go:223] Handling node with IPs: map[172.29.70.202:{}]
	I0706 20:55:38.605676       1 main.go:227] handling current node
	I0706 20:55:38.605691       1 main.go:223] Handling node with IPs: map[172.29.79.241:{}]
	I0706 20:55:38.605972       1 main.go:250] Node multinode-144300-m02 has CIDR [10.244.1.0/24] 
	I0706 20:55:38.606446       1 main.go:223] Handling node with IPs: map[172.29.66.123:{}]
	I0706 20:55:38.606525       1 main.go:250] Node multinode-144300-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kindnet [8e1a757ea338] <==
	* I0706 21:00:00.461030       1 main.go:223] Handling node with IPs: map[172.29.78.0:{}]
	I0706 21:00:00.461129       1 main.go:227] handling current node
	I0706 21:00:00.461145       1 main.go:223] Handling node with IPs: map[172.29.74.65:{}]
	I0706 21:00:00.461152       1 main.go:250] Node multinode-144300-m02 has CIDR [10.244.1.0/24] 
	I0706 21:00:00.461593       1 main.go:223] Handling node with IPs: map[172.29.66.123:{}]
	I0706 21:00:00.461672       1 main.go:250] Node multinode-144300-m03 has CIDR [10.244.3.0/24] 
	I0706 21:00:10.479018       1 main.go:223] Handling node with IPs: map[172.29.78.0:{}]
	I0706 21:00:10.479149       1 main.go:227] handling current node
	I0706 21:00:10.479165       1 main.go:223] Handling node with IPs: map[172.29.74.65:{}]
	I0706 21:00:10.479276       1 main.go:250] Node multinode-144300-m02 has CIDR [10.244.1.0/24] 
	I0706 21:00:10.479793       1 main.go:223] Handling node with IPs: map[172.29.78.173:{}]
	I0706 21:00:10.479810       1 main.go:250] Node multinode-144300-m03 has CIDR [10.244.2.0/24] 
	I0706 21:00:10.479875       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.29.78.173 Flags: [] Table: 0} 
	I0706 21:00:20.486734       1 main.go:223] Handling node with IPs: map[172.29.78.0:{}]
	I0706 21:00:20.486868       1 main.go:227] handling current node
	I0706 21:00:20.486984       1 main.go:223] Handling node with IPs: map[172.29.74.65:{}]
	I0706 21:00:20.487093       1 main.go:250] Node multinode-144300-m02 has CIDR [10.244.1.0/24] 
	I0706 21:00:20.487396       1 main.go:223] Handling node with IPs: map[172.29.78.173:{}]
	I0706 21:00:20.487451       1 main.go:250] Node multinode-144300-m03 has CIDR [10.244.2.0/24] 
	I0706 21:00:30.496808       1 main.go:223] Handling node with IPs: map[172.29.78.0:{}]
	I0706 21:00:30.496866       1 main.go:227] handling current node
	I0706 21:00:30.496879       1 main.go:223] Handling node with IPs: map[172.29.74.65:{}]
	I0706 21:00:30.496902       1 main.go:250] Node multinode-144300-m02 has CIDR [10.244.1.0/24] 
	I0706 21:00:30.497451       1 main.go:223] Handling node with IPs: map[172.29.78.173:{}]
	I0706 21:00:30.497511       1 main.go:250] Node multinode-144300-m03 has CIDR [10.244.2.0/24] 
	
	* 
	* ==> kube-apiserver [4af3781c4220] <==
	* I0706 20:57:33.357032       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0706 20:57:33.357044       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0706 20:57:33.410389       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0706 20:57:33.457079       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0706 20:57:33.457146       1 aggregator.go:152] initial CRD sync complete...
	I0706 20:57:33.457154       1 autoregister_controller.go:141] Starting autoregister controller
	I0706 20:57:33.457160       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0706 20:57:33.457166       1 cache.go:39] Caches are synced for autoregister controller
	I0706 20:57:33.485188       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0706 20:57:33.489988       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0706 20:57:33.490003       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0706 20:57:33.491386       1 shared_informer.go:318] Caches are synced for configmaps
	I0706 20:57:33.492154       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0706 20:57:33.495413       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0706 20:57:33.496801       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0706 20:57:33.939771       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0706 20:57:34.314922       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0706 20:57:34.842851       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [172.29.78.0]
	I0706 20:57:34.844832       1 controller.go:624] quota admission added evaluator for: endpoints
	I0706 20:57:34.869697       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0706 20:57:36.785066       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0706 20:57:36.983968       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0706 20:57:37.006549       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0706 20:57:37.130698       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0706 20:57:37.142542       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [877799b6b059] <==
	* I0706 20:57:46.582808       1 shared_informer.go:318] Caches are synced for garbage collector
	I0706 20:57:46.582913       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0706 20:57:46.619867       1 shared_informer.go:318] Caches are synced for garbage collector
	W0706 20:58:26.054428       1 topologycache.go:232] Can't get CPU or zone information for multinode-144300-m03 node
	I0706 20:58:26.055568       1 event.go:307] "Event occurred" object="multinode-144300-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-144300-m02 status is now: NodeNotReady"
	I0706 20:58:26.065870       1 event.go:307] "Event occurred" object="multinode-144300-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-144300-m03 status is now: NodeNotReady"
	I0706 20:58:26.079758       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-f5vmt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0706 20:58:26.095512       1 event.go:307] "Event occurred" object="kube-system/kindnet-jhjpn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0706 20:58:26.106349       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-qp6pw" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0706 20:58:26.114326       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-x7bwf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0706 20:58:26.127957       1 event.go:307] "Event occurred" object="kube-system/kindnet-z6sjf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0706 20:58:52.088220       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-bfmd7"
	I0706 20:58:56.134034       1 event.go:307] "Event occurred" object="multinode-144300-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-144300-m02 event: Removing Node multinode-144300-m02 from Controller"
	I0706 20:58:56.402206       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-144300-m02\" does not exist"
	I0706 20:58:56.407417       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-qp6pw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-qp6pw"
	I0706 20:58:56.412605       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-144300-m02" podCIDRs=[10.244.1.0/24]
	I0706 20:59:01.135987       1 event.go:307] "Event occurred" object="multinode-144300-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-144300-m02 event: Registered Node multinode-144300-m02 in Controller"
	W0706 20:59:07.562976       1 topologycache.go:232] Can't get CPU or zone information for multinode-144300-m02 node
	I0706 20:59:11.174766       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-bfmd7" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-bfmd7"
	I0706 20:59:11.174806       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb-qp6pw" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-67b7f59bb-qp6pw"
	W0706 21:00:07.355135       1 topologycache.go:232] Can't get CPU or zone information for multinode-144300-m02 node
	I0706 21:00:08.930243       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-144300-m03\" does not exist"
	W0706 21:00:08.931280       1 topologycache.go:232] Can't get CPU or zone information for multinode-144300-m02 node
	I0706 21:00:08.941781       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-144300-m03" podCIDRs=[10.244.2.0/24]
	W0706 21:00:27.749090       1 topologycache.go:232] Can't get CPU or zone information for multinode-144300-m02 node
	
	* 
	* ==> kube-controller-manager [9deab8b718f3] <==
	* I0706 20:48:24.538476       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-f5vmt"
	I0706 20:48:24.561694       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-144300-m02" podCIDRs=[10.244.1.0/24]
	I0706 20:48:28.746279       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-144300-m02"
	I0706 20:48:28.746341       1 event.go:307] "Event occurred" object="multinode-144300-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-144300-m02 event: Registered Node multinode-144300-m02 in Controller"
	W0706 20:48:39.304147       1 topologycache.go:232] Can't get CPU or zone information for multinode-144300-m02 node
	I0706 20:48:50.630613       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-67b7f59bb to 2"
	I0706 20:48:50.671519       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-qp6pw"
	I0706 20:48:50.712414       1 event.go:307] "Event occurred" object="default/busybox-67b7f59bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-67b7f59bb-47tnt"
	W0706 20:50:55.147604       1 topologycache.go:232] Can't get CPU or zone information for multinode-144300-m02 node
	I0706 20:50:55.149914       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-144300-m03\" does not exist"
	I0706 20:50:55.167068       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-144300-m03" podCIDRs=[10.244.2.0/24]
	I0706 20:50:55.197919       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-x7bwf"
	I0706 20:50:55.197956       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-jhjpn"
	I0706 20:50:58.794644       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-144300-m03"
	I0706 20:50:58.795094       1 event.go:307] "Event occurred" object="multinode-144300-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-144300-m03 event: Registered Node multinode-144300-m03 in Controller"
	W0706 20:51:13.152137       1 topologycache.go:232] Can't get CPU or zone information for multinode-144300-m02 node
	W0706 20:54:18.866895       1 topologycache.go:232] Can't get CPU or zone information for multinode-144300-m02 node
	I0706 20:54:18.867097       1 event.go:307] "Event occurred" object="multinode-144300-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-144300-m03 status is now: NodeNotReady"
	I0706 20:54:18.895145       1 event.go:307] "Event occurred" object="kube-system/kindnet-jhjpn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0706 20:54:18.914482       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-x7bwf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	W0706 20:55:09.658208       1 topologycache.go:232] Can't get CPU or zone information for multinode-144300-m02 node
	I0706 20:55:10.867494       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-144300-m03\" does not exist"
	W0706 20:55:10.868672       1 topologycache.go:232] Can't get CPU or zone information for multinode-144300-m02 node
	I0706 20:55:10.886509       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-144300-m03" podCIDRs=[10.244.3.0/24]
	W0706 20:55:20.157382       1 topologycache.go:232] Can't get CPU or zone information for multinode-144300-m02 node
	
	* 
	* ==> kube-proxy [b92d8760a51f] <==
	* I0706 20:46:50.479465       1 node.go:141] Successfully retrieved node IP: 172.29.70.202
	I0706 20:46:50.479762       1 server_others.go:110] "Detected node IP" address="172.29.70.202"
	I0706 20:46:50.479793       1 server_others.go:554] "Using iptables proxy"
	I0706 20:46:50.544792       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0706 20:46:50.544848       1 server_others.go:192] "Using iptables Proxier"
	I0706 20:46:50.546832       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0706 20:46:50.548167       1 server.go:658] "Version info" version="v1.27.3"
	I0706 20:46:50.548186       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 20:46:50.551347       1 config.go:188] "Starting service config controller"
	I0706 20:46:50.551435       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0706 20:46:50.552682       1 config.go:97] "Starting endpoint slice config controller"
	I0706 20:46:50.552860       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0706 20:46:50.575014       1 config.go:315] "Starting node config controller"
	I0706 20:46:50.575070       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0706 20:46:50.652948       1 shared_informer.go:318] Caches are synced for service config
	I0706 20:46:50.653084       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0706 20:46:50.675520       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [ea971da32b0f] <==
	* I0706 20:57:36.523344       1 node.go:141] Successfully retrieved node IP: 172.29.78.0
	I0706 20:57:36.523611       1 server_others.go:110] "Detected node IP" address="172.29.78.0"
	I0706 20:57:36.523640       1 server_others.go:554] "Using iptables proxy"
	I0706 20:57:36.662711       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0706 20:57:36.662771       1 server_others.go:192] "Using iptables Proxier"
	I0706 20:57:36.671689       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0706 20:57:36.675623       1 server.go:658] "Version info" version="v1.27.3"
	I0706 20:57:36.675641       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 20:57:36.679524       1 config.go:188] "Starting service config controller"
	I0706 20:57:36.679715       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0706 20:57:36.679873       1 config.go:315] "Starting node config controller"
	I0706 20:57:36.680178       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0706 20:57:36.684411       1 config.go:97] "Starting endpoint slice config controller"
	I0706 20:57:36.684733       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0706 20:57:36.780728       1 shared_informer.go:318] Caches are synced for node config
	I0706 20:57:36.780762       1 shared_informer.go:318] Caches are synced for service config
	I0706 20:57:36.785254       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [764db3659898] <==
	* I0706 20:57:30.748312       1 serving.go:348] Generated self-signed cert in-memory
	W0706 20:57:33.391174       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0706 20:57:33.391597       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0706 20:57:33.391693       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0706 20:57:33.392971       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0706 20:57:33.423809       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0706 20:57:33.423902       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 20:57:33.428384       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0706 20:57:33.429173       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0706 20:57:33.429417       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0706 20:57:33.429673       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0706 20:57:33.530656       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [775dc0b6d0dc] <==
	* W0706 20:46:33.303483       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0706 20:46:33.303591       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0706 20:46:33.487559       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0706 20:46:33.487607       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0706 20:46:33.498950       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0706 20:46:33.498972       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0706 20:46:33.501016       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0706 20:46:33.501208       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0706 20:46:33.529401       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0706 20:46:33.529427       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0706 20:46:33.573133       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0706 20:46:33.573673       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0706 20:46:33.600606       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0706 20:46:33.600636       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0706 20:46:33.701202       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0706 20:46:33.701307       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0706 20:46:33.753911       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0706 20:46:33.754185       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0706 20:46:33.761519       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0706 20:46:33.761859       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0706 20:46:35.361091       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0706 20:55:40.640689       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0706 20:55:40.641794       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0706 20:55:40.642028       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0706 20:55:40.642058       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-07-06 20:56:48 UTC, ends at Thu 2023-07-06 21:00:39 UTC. --
	Jul 06 20:57:41 multinode-144300 kubelet[1450]: E0706 20:57:41.890674    1450 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jul 06 20:57:41 multinode-144300 kubelet[1450]: E0706 20:57:41.890705    1450 projected.go:198] Error preparing data for projected volume kube-api-access-bwbl6 for pod default/busybox-67b7f59bb-47tnt: object "default"/"kube-root-ca.crt" not registered
	Jul 06 20:57:41 multinode-144300 kubelet[1450]: E0706 20:57:41.890752    1450 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/12f3117a-c156-4909-8cd7-117df3106624-kube-api-access-bwbl6 podName:12f3117a-c156-4909-8cd7-117df3106624 nodeName:}" failed. No retries permitted until 2023-07-06 20:57:49.890736388 +0000 UTC m=+23.436562257 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-bwbl6" (UniqueName: "kubernetes.io/projected/12f3117a-c156-4909-8cd7-117df3106624-kube-api-access-bwbl6") pod "busybox-67b7f59bb-47tnt" (UID: "12f3117a-c156-4909-8cd7-117df3106624") : object "default"/"kube-root-ca.crt" not registered
	Jul 06 20:57:41 multinode-144300 kubelet[1450]: E0706 20:57:41.890768    1450 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 06 20:57:41 multinode-144300 kubelet[1450]: E0706 20:57:41.890821    1450 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dfa019d5-9528-4f25-8aab-03d1d276bb0c-config-volume podName:dfa019d5-9528-4f25-8aab-03d1d276bb0c nodeName:}" failed. No retries permitted until 2023-07-06 20:57:49.89080879 +0000 UTC m=+23.436634659 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/dfa019d5-9528-4f25-8aab-03d1d276bb0c-config-volume") pod "coredns-5d78c9869d-m7j99" (UID: "dfa019d5-9528-4f25-8aab-03d1d276bb0c") : object "kube-system"/"coredns" not registered
	Jul 06 20:57:42 multinode-144300 kubelet[1450]: E0706 20:57:42.115503    1450 pod_workers.go:1294] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-5d78c9869d-m7j99" podUID=dfa019d5-9528-4f25-8aab-03d1d276bb0c
	Jul 06 20:57:42 multinode-144300 kubelet[1450]: I0706 20:57:42.574911    1450 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 06 20:58:07 multinode-144300 kubelet[1450]: I0706 20:58:07.520540    1450 scope.go:115] "RemoveContainer" containerID="7d425ac2e145fa8ae6f3e914c0c98aef74614f55c624b72bb5928c749c526259"
	Jul 06 20:58:07 multinode-144300 kubelet[1450]: I0706 20:58:07.521056    1450 scope.go:115] "RemoveContainer" containerID="4ef833c701642d2d0c9ab2f26d914fa6ff8cc864c0b2224b69bf2522df95b5e8"
	Jul 06 20:58:07 multinode-144300 kubelet[1450]: E0706 20:58:07.521386    1450 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(75b208e7-5f24-4849-867c-c7fa45213999)\"" pod="kube-system/storage-provisioner" podUID=75b208e7-5f24-4849-867c-c7fa45213999
	Jul 06 20:58:20 multinode-144300 kubelet[1450]: I0706 20:58:20.115027    1450 scope.go:115] "RemoveContainer" containerID="4ef833c701642d2d0c9ab2f26d914fa6ff8cc864c0b2224b69bf2522df95b5e8"
	Jul 06 20:58:27 multinode-144300 kubelet[1450]: I0706 20:58:27.102074    1450 scope.go:115] "RemoveContainer" containerID="f7157ce4715f98e188da02394a20d0175ce4b1b5733d8c0d3cb89c18b6a396b1"
	Jul 06 20:58:27 multinode-144300 kubelet[1450]: E0706 20:58:27.157218    1450 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 06 20:58:27 multinode-144300 kubelet[1450]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 06 20:58:27 multinode-144300 kubelet[1450]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 06 20:58:27 multinode-144300 kubelet[1450]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 06 20:58:27 multinode-144300 kubelet[1450]: I0706 20:58:27.160945    1450 scope.go:115] "RemoveContainer" containerID="67b35d14730ac347a854f8cac72336014192f32ab8fee38864c05a10f221e1f3"
	Jul 06 20:59:27 multinode-144300 kubelet[1450]: E0706 20:59:27.145096    1450 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 06 20:59:27 multinode-144300 kubelet[1450]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 06 20:59:27 multinode-144300 kubelet[1450]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 06 20:59:27 multinode-144300 kubelet[1450]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 06 21:00:27 multinode-144300 kubelet[1450]: E0706 21:00:27.146762    1450 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 06 21:00:27 multinode-144300 kubelet[1450]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 06 21:00:27 multinode-144300 kubelet[1450]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 06 21:00:27 multinode-144300 kubelet[1450]:  > table=nat chain=KUBE-KUBELET-CANARY
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-144300 -n multinode-144300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-144300 -n multinode-144300: (4.5994916s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-144300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (311.83s)

TestRunningBinaryUpgrade (462.42s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.6.2.2792221438.exe start -p running-upgrade-715200 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:132: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.6.2.2792221438.exe start -p running-upgrade-715200 --memory=2200 --vm-driver=hyperv: (4m10.6226399s)
version_upgrade_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-715200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:142: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p running-upgrade-715200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (2m31.8477349s)

-- stdout --
	* [running-upgrade-715200] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the hyperv driver based on existing profile
	* Starting control plane node running-upgrade-715200 in cluster running-upgrade-715200
	* Updating the running hyperv "running-upgrade-715200" VM ...
	
	

-- /stdout --
** stderr ** 
	I0706 21:20:47.371558    9892 out.go:296] Setting OutFile to fd 1428 ...
	I0706 21:20:47.431434    9892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 21:20:47.431434    9892 out.go:309] Setting ErrFile to fd 1492...
	I0706 21:20:47.431434    9892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 21:20:47.462310    9892 out.go:303] Setting JSON to false
	I0706 21:20:47.467542    9892 start.go:127] hostinfo: {"hostname":"minikube6","uptime":496584,"bootTime":1688181863,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3155 Build 19045.3155","kernelVersion":"10.0.19045.3155 Build 19045.3155","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0706 21:20:47.467542    9892 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 21:20:47.471441    9892 out.go:177] * [running-upgrade-715200] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	I0706 21:20:47.476108    9892 notify.go:220] Checking for updates...
	I0706 21:20:47.478927    9892 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 21:20:47.482216    9892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 21:20:47.485324    9892 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0706 21:20:47.487489    9892 out.go:177]   - MINIKUBE_LOCATION=16832
	I0706 21:20:47.492957    9892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 21:20:47.497096    9892 config.go:182] Loaded profile config "running-upgrade-715200": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0706 21:20:47.497194    9892 start_flags.go:683] config upgrade: Driver=hyperv
	I0706 21:20:47.497247    9892 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953
	I0706 21:20:47.497483    9892 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-715200\config.json ...
	I0706 21:20:47.505108    9892 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0706 21:20:47.507036    9892 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 21:20:49.466995    9892 out.go:177] * Using the hyperv driver based on existing profile
	I0706 21:20:49.473509    9892 start.go:297] selected driver: hyperv
	I0706 21:20:49.473509    9892 start.go:944] validating driver "hyperv" against &{Name:running-upgrade-715200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0
ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.29.70.232 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0}
	I0706 21:20:49.473509    9892 start.go:955] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 21:20:49.533247    9892 cni.go:84] Creating CNI manager for ""
	I0706 21:20:49.533247    9892 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0706 21:20:49.533247    9892 start_flags.go:319] config:
	{Name:running-upgrade-715200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.29.70.232 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 21:20:49.533960    9892 iso.go:125] acquiring lock: {Name:mk7608081596fbafb2a8a8984d26768bdf174467 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:20:49.535828    9892 out.go:177] * Starting control plane node running-upgrade-715200 in cluster running-upgrade-715200
	I0706 21:20:49.542865    9892 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W0706 21:20:49.584805    9892 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0706 21:20:49.585141    9892 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\running-upgrade-715200\config.json ...
	I0706 21:20:49.585224    9892 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I0706 21:20:49.585224    9892 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I0706 21:20:49.585224    9892 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0706 21:20:49.585329    9892 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I0706 21:20:49.585224    9892 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0706 21:20:49.585430    9892 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0706 21:20:49.585430    9892 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I0706 21:20:49.585224    9892 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I0706 21:20:49.593123    9892 start.go:365] acquiring machines lock for running-upgrade-715200: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 21:20:49.799797    9892 cache.go:107] acquiring lock: {Name:mk9be06f7dda6cd9f88a49bfda7b93c646500d50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:20:49.800795    9892 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0706 21:20:49.801838    9892 cache.go:107] acquiring lock: {Name:mkc4fee9499fca2b7d7c08251407e6b74559e928 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:20:49.802176    9892 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0706 21:20:49.802658    9892 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 217.1302ms
	I0706 21:20:49.802658    9892 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0706 21:20:49.804738    9892 cache.go:107] acquiring lock: {Name:mk99c024335b2b0df925d0a4f8be63420005fb4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:20:49.804882    9892 cache.go:107] acquiring lock: {Name:mk05ded54209e4a708242c69b9964ff468668659 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:20:49.805653    9892 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0706 21:20:49.806552    9892 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0706 21:20:49.807806    9892 cache.go:107] acquiring lock: {Name:mke18763d5cf9d9bc5d35bbd720515199764b657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:20:49.808471    9892 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0706 21:20:49.816399    9892 cache.go:107] acquiring lock: {Name:mk954426991f5fcd2ab5db06d1d0131a8f1b324a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:20:49.816661    9892 cache.go:107] acquiring lock: {Name:mkf546fa1bb9082500c9826611ed614659cd3a1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:20:49.816887    9892 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0706 21:20:49.816887    9892 cache.go:107] acquiring lock: {Name:mk973f8dac8863f6898376a57143f99d2ac9f288 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:20:49.816887    9892 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0706 21:20:49.816887    9892 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0706 21:20:49.870709    9892 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0706 21:20:49.883050    9892 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0706 21:20:49.883050    9892 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0706 21:20:49.883050    9892 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0706 21:20:49.884321    9892 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0706 21:20:49.884936    9892 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0706 21:20:49.884936    9892 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	W0706 21:20:50.038732    9892 image.go:187] authn lookup for registry.k8s.io/kube-scheduler:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0706 21:20:50.198637    9892 image.go:187] authn lookup for registry.k8s.io/kube-proxy:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0706 21:20:50.331613    9892 image.go:187] authn lookup for registry.k8s.io/kube-controller-manager:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0706 21:20:50.470571    9892 image.go:187] authn lookup for registry.k8s.io/coredns:1.6.5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0706 21:20:50.601207    9892 image.go:187] authn lookup for registry.k8s.io/etcd:3.4.3-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W0706 21:20:50.719148    9892 image.go:187] authn lookup for registry.k8s.io/kube-apiserver:v1.17.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0706 21:20:50.733960    9892 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I0706 21:20:50.795110    9892 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I0706 21:20:50.814384    9892 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	W0706 21:20:50.873027    9892 image.go:187] authn lookup for registry.k8s.io/pause:3.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I0706 21:20:50.885814    9892 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0706 21:20:51.035399    9892 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I0706 21:20:51.080857    9892 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0706 21:20:51.199708    9892 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 exists
	I0706 21:20:51.200331    9892 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.1" took 1.6149901s
	I0706 21:20:51.200331    9892 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 succeeded
	I0706 21:20:51.733992    9892 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 exists
	I0706 21:20:51.733992    9892 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns_1.6.5" took 2.1483371s
	I0706 21:20:51.740228    9892 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 succeeded
	I0706 21:20:51.884503    9892 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 exists
	I0706 21:20:51.886286    9892 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.17.0" took 2.3008396s
	I0706 21:20:51.886286    9892 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 succeeded
	I0706 21:20:52.363956    9892 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 exists
	I0706 21:20:52.364133    9892 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.17.0" took 2.7788882s
	I0706 21:20:52.364263    9892 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 succeeded
	I0706 21:20:52.727511    9892 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 exists
	I0706 21:20:52.727511    9892 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.17.0" took 3.1418484s
	I0706 21:20:52.727511    9892 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 succeeded
	I0706 21:20:52.885395    9892 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I0706 21:20:53.369718    9892 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 exists
	I0706 21:20:53.369718    9892 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.4.3-0" took 3.7841642s
	I0706 21:20:53.369718    9892 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 succeeded
	I0706 21:20:53.583517    9892 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 exists
	I0706 21:20:53.583517    9892 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.17.0" took 3.998264s
	I0706 21:20:53.583517    9892 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 succeeded
	I0706 21:20:53.583517    9892 cache.go:87] Successfully saved all images to host disk.
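The caching phase above is the fallback path: the v1.17.0 preload tarball returned a 404 (preload.go:115), so minikube saves each image to the host cache individually. The `localpath.go:146` "windows sanitize" lines show why the cache file names differ from the image references: `:` is not a legal character in Windows file names, so the tag separator is rewritten to `_` (e.g. `kube-apiserver:v1.17.0` becomes `kube-apiserver_v1.17.0`). A minimal sketch of that substitution (the function name is illustrative, not minikube's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeImageRef rewrites the tag separator in an image reference so the
// result is a valid Windows file name, mirroring the localpath.go rewrite
// visible in the log (illustrative helper, not minikube's implementation).
func sanitizeImageRef(ref string) string {
	return strings.ReplaceAll(ref, ":", "_")
}

func main() {
	fmt.Println(sanitizeImageRef("registry.k8s.io/kube-apiserver:v1.17.0"))
	// prints registry.k8s.io/kube-apiserver_v1.17.0
}
```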
	I0706 21:22:28.412898    9892 start.go:369] acquired machines lock for "running-upgrade-715200" in 1m38.8190539s
	I0706 21:22:28.413111    9892 start.go:96] Skipping create...Using existing machine configuration
	I0706 21:22:28.413196    9892 fix.go:54] fixHost starting: minikube
	I0706 21:22:28.414146    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-715200 ).state
	I0706 21:22:29.140547    9892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:22:29.140547    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:29.140547    9892 fix.go:102] recreateIfNeeded on running-upgrade-715200: state=Running err=<nil>
	W0706 21:22:29.140547    9892 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 21:22:29.272340    9892 out.go:177] * Updating the running hyperv "running-upgrade-715200" VM ...
	I0706 21:22:29.337958    9892 machine.go:88] provisioning docker machine ...
	I0706 21:22:29.338019    9892 buildroot.go:166] provisioning hostname "running-upgrade-715200"
	I0706 21:22:29.338019    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-715200 ).state
	I0706 21:22:30.028575    9892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:22:30.028575    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:30.028575    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:22:31.106358    9892 main.go:141] libmachine: [stdout =====>] : 172.29.70.232
	
	I0706 21:22:31.106358    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:31.110363    9892 main.go:141] libmachine: Using SSH client type: native
	I0706 21:22:31.111283    9892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.70.232 22 <nil> <nil>}
	I0706 21:22:31.111354    9892 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-715200 && echo "running-upgrade-715200" | sudo tee /etc/hostname
	I0706 21:22:31.248439    9892 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-715200
	
	I0706 21:22:31.248439    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-715200 ).state
	I0706 21:22:31.950229    9892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:22:31.950229    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:31.950229    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:22:33.000075    9892 main.go:141] libmachine: [stdout =====>] : 172.29.70.232
	
	I0706 21:22:33.000075    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:33.004337    9892 main.go:141] libmachine: Using SSH client type: native
	I0706 21:22:33.005415    9892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.70.232 22 <nil> <nil>}
	I0706 21:22:33.005415    9892 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-715200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-715200/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-715200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0706 21:22:33.140089    9892 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0706 21:22:33.140089    9892 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0706 21:22:33.140089    9892 buildroot.go:174] setting up certificates
	I0706 21:22:33.140089    9892 provision.go:83] configureAuth start
	I0706 21:22:33.140089    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-715200 ).state
	I0706 21:22:33.812466    9892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:22:33.812466    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:33.812466    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:22:34.797783    9892 main.go:141] libmachine: [stdout =====>] : 172.29.70.232
	
	I0706 21:22:34.798273    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:34.798273    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-715200 ).state
	I0706 21:22:35.458784    9892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:22:35.458784    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:35.458784    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:22:36.435794    9892 main.go:141] libmachine: [stdout =====>] : 172.29.70.232
	
	I0706 21:22:36.435794    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:36.435794    9892 provision.go:138] copyHostCerts
	I0706 21:22:36.436349    9892 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0706 21:22:36.436349    9892 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0706 21:22:36.436820    9892 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0706 21:22:36.437484    9892 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0706 21:22:36.437484    9892 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0706 21:22:36.438330    9892 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0706 21:22:36.439084    9892 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0706 21:22:36.439084    9892 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0706 21:22:36.439910    9892 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0706 21:22:36.440948    9892 provision.go:112] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.running-upgrade-715200 san=[172.29.70.232 172.29.70.232 localhost 127.0.0.1 minikube running-upgrade-715200]
	I0706 21:22:36.548787    9892 provision.go:172] copyRemoteCerts
	I0706 21:22:36.559013    9892 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0706 21:22:36.559013    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-715200 ).state
	I0706 21:22:37.233192    9892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:22:37.233192    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:37.233192    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:22:38.230218    9892 main.go:141] libmachine: [stdout =====>] : 172.29.70.232
	
	I0706 21:22:38.230371    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:38.230534    9892 sshutil.go:53] new ssh client: &{IP:172.29.70.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\running-upgrade-715200\id_rsa Username:docker}
	I0706 21:22:38.342809    9892 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.7837838s)
	I0706 21:22:38.343260    9892 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0706 21:22:38.365757    9892 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0706 21:22:38.383062    9892 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0706 21:22:38.399390    9892 provision.go:86] duration metric: configureAuth took 5.2592631s
	I0706 21:22:38.399390    9892 buildroot.go:189] setting minikube options for container-runtime
	I0706 21:22:38.399937    9892 config.go:182] Loaded profile config "running-upgrade-715200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0706 21:22:38.400125    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-715200 ).state
	I0706 21:22:39.110269    9892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:22:39.110790    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:39.110864    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:22:40.215349    9892 main.go:141] libmachine: [stdout =====>] : 172.29.70.232
	
	I0706 21:22:40.215349    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:40.219765    9892 main.go:141] libmachine: Using SSH client type: native
	I0706 21:22:40.220515    9892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.70.232 22 <nil> <nil>}
	I0706 21:22:40.220515    9892 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0706 21:22:40.374579    9892 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0706 21:22:40.374683    9892 buildroot.go:70] root file system type: tmpfs
	I0706 21:22:40.374683    9892 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0706 21:22:40.374683    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-715200 ).state
	I0706 21:22:41.165346    9892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:22:41.165495    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:41.165534    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:22:42.244019    9892 main.go:141] libmachine: [stdout =====>] : 172.29.70.232
	
	I0706 21:22:42.244402    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:42.249109    9892 main.go:141] libmachine: Using SSH client type: native
	I0706 21:22:42.249581    9892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.70.232 22 <nil> <nil>}
	I0706 21:22:42.249581    9892 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0706 21:22:42.390367    9892 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0706 21:22:42.390367    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-715200 ).state
	I0706 21:22:43.097176    9892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:22:43.097284    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:43.097284    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:22:44.119657    9892 main.go:141] libmachine: [stdout =====>] : 172.29.70.232
	
	I0706 21:22:44.119657    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:44.124256    9892 main.go:141] libmachine: Using SSH client type: native
	I0706 21:22:44.125240    9892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.70.232 22 <nil> <nil>}
	I0706 21:22:44.125240    9892 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0706 21:22:57.876524    9892 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service
	+++ /lib/systemd/system/docker.service.new
	@@ -3,9 +3,12 @@
	 Documentation=https://docs.docker.com
	 After=network.target  minikube-automount.service docker.socket
	 Requires= minikube-automount.service docker.socket 
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	+Restart=on-failure
	 
	 
	 
	@@ -21,7 +24,7 @@
	 # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	 ExecStart=
	 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	-ExecReload=/bin/kill -s HUP 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0706 21:22:57.876590    9892 machine.go:91] provisioned docker machine in 28.5384238s
	I0706 21:22:57.876647    9892 start.go:300] post-start starting for "running-upgrade-715200" (driver="hyperv")
	I0706 21:22:57.876647    9892 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0706 21:22:57.886652    9892 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0706 21:22:57.886652    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-715200 ).state
	I0706 21:22:58.605875    9892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:22:58.605914    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:58.606037    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:22:59.694109    9892 main.go:141] libmachine: [stdout =====>] : 172.29.70.232
	
	I0706 21:22:59.694377    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:22:59.694848    9892 sshutil.go:53] new ssh client: &{IP:172.29.70.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\running-upgrade-715200\id_rsa Username:docker}
	I0706 21:22:59.859043    9892 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.9723772s)
	I0706 21:22:59.872546    9892 ssh_runner.go:195] Run: cat /etc/os-release
	I0706 21:22:59.883717    9892 info.go:137] Remote host: Buildroot 2019.02.7
	I0706 21:22:59.883717    9892 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0706 21:22:59.884242    9892 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0706 21:22:59.885073    9892 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> 82562.pem in /etc/ssl/certs
	I0706 21:22:59.896198    9892 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0706 21:22:59.926403    9892 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /etc/ssl/certs/82562.pem (1708 bytes)
	I0706 21:22:59.982403    9892 start.go:303] post-start completed in 2.1057415s
	I0706 21:22:59.982447    9892 fix.go:56] fixHost completed within 31.5691054s
	I0706 21:22:59.982491    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-715200 ).state
	I0706 21:23:00.691571    9892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:23:00.691571    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:23:00.691571    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:23:01.674508    9892 main.go:141] libmachine: [stdout =====>] : 172.29.70.232
	
	I0706 21:23:01.674508    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:23:01.679099    9892 main.go:141] libmachine: Using SSH client type: native
	I0706 21:23:01.680005    9892 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.70.232 22 <nil> <nil>}
	I0706 21:23:01.680145    9892 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0706 21:23:01.808287    9892 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688678581.801219158
	
	I0706 21:23:01.808287    9892 fix.go:206] guest clock: 1688678581.801219158
	I0706 21:23:01.808287    9892 fix.go:219] Guest: 2023-07-06 21:23:01.801219158 +0000 UTC Remote: 2023-07-06 21:22:59.9824475 +0000 UTC m=+132.692864201 (delta=1.818771658s)
	I0706 21:23:01.808421    9892 fix.go:190] guest clock delta is within tolerance: 1.818771658s
	I0706 21:23:01.808477    9892 start.go:83] releasing machines lock for "running-upgrade-715200", held for 33.3952256s
	I0706 21:23:01.808655    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-715200 ).state
	I0706 21:23:02.504025    9892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:23:02.504241    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:23:02.504241    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:23:03.614769    9892 main.go:141] libmachine: [stdout =====>] : 172.29.70.232
	
	I0706 21:23:03.614874    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:23:03.618291    9892 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0706 21:23:03.618291    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-715200 ).state
	I0706 21:23:03.629190    9892 ssh_runner.go:195] Run: cat /version.json
	I0706 21:23:03.629190    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM running-upgrade-715200 ).state
	I0706 21:23:04.348247    9892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:23:04.348305    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:23:04.348402    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:23:04.363268    9892 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:23:04.363268    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:23:04.363268    9892 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM running-upgrade-715200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:23:05.490298    9892 main.go:141] libmachine: [stdout =====>] : 172.29.70.232
	
	I0706 21:23:05.490381    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:23:05.490381    9892 sshutil.go:53] new ssh client: &{IP:172.29.70.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\running-upgrade-715200\id_rsa Username:docker}
	I0706 21:23:05.550324    9892 main.go:141] libmachine: [stdout =====>] : 172.29.70.232
	
	I0706 21:23:05.550324    9892 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:23:05.550324    9892 sshutil.go:53] new ssh client: &{IP:172.29.70.232 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\running-upgrade-715200\id_rsa Username:docker}
	I0706 21:23:05.652113    9892 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.0338071s)
	I0706 21:23:05.655800    9892 ssh_runner.go:235] Completed: cat /version.json: (2.0265954s)
	W0706 21:23:05.655938    9892 start.go:483] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0706 21:23:05.666212    9892 ssh_runner.go:195] Run: systemctl --version
	I0706 21:23:05.692849    9892 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0706 21:23:05.703453    9892 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0706 21:23:05.715748    9892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0706 21:23:05.750432    9892 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0706 21:23:05.762278    9892 cni.go:311] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0706 21:23:05.762341    9892 start.go:466] detecting cgroup driver to use...
	I0706 21:23:05.762673    9892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 21:23:05.796440    9892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0706 21:23:05.816704    9892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0706 21:23:05.827250    9892 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0706 21:23:05.836325    9892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0706 21:23:05.856371    9892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 21:23:05.877513    9892 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0706 21:23:05.902610    9892 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 21:23:05.921792    9892 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0706 21:23:05.942199    9892 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0706 21:23:05.960748    9892 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0706 21:23:05.980052    9892 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0706 21:23:06.002910    9892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:23:06.232461    9892 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0706 21:23:06.259556    9892 start.go:466] detecting cgroup driver to use...
	I0706 21:23:06.273752    9892 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0706 21:23:06.300019    9892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 21:23:06.324437    9892 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0706 21:23:06.400965    9892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 21:23:06.427070    9892 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 21:23:06.443333    9892 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 21:23:06.474082    9892 ssh_runner.go:195] Run: which cri-dockerd
	I0706 21:23:06.496358    9892 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0706 21:23:06.504162    9892 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0706 21:23:06.528613    9892 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0706 21:23:06.737645    9892 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0706 21:23:06.992767    9892 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0706 21:23:06.992767    9892 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0706 21:23:07.021172    9892 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:23:07.289385    9892 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 21:23:19.057041    9892 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.7675702s)
	I0706 21:23:19.059705    9892 out.go:177] 
	W0706 21:23:19.062444    9892 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0706 21:23:19.062444    9892 out.go:239] * 
	W0706 21:23:19.063528    9892 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 21:23:19.066882    9892 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:144: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p running-upgrade-715200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-06 21:23:19.1373733 +0000 UTC m=+4801.606265901
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-715200 -n running-upgrade-715200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p running-upgrade-715200 -n running-upgrade-715200: exit status 6 (4.4952494s)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0706 21:23:23.566554    9572 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-715200" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-715200" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-715200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-715200
E0706 21:23:31.919839    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-715200: (38.331691s)
--- FAIL: TestRunningBinaryUpgrade (462.42s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (44.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-504800 --no-kubernetes --driver=hyperv
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-504800 --no-kubernetes --driver=hyperv: exit status 1 (21.2371285s)

                                                
                                                
-- stdout --
	* [NoKubernetes-504800] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-504800

                                                
                                                
-- /stdout --
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-504800 --no-kubernetes --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-504800 -n NoKubernetes-504800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-504800 -n NoKubernetes-504800: (5.1602466s)
helpers_test.go:244: <<< TestNoKubernetes/serial/StartWithStopK8s FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestNoKubernetes/serial/StartWithStopK8s]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-504800 logs -n 25
E0706 21:21:31.251490    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-504800 logs -n 25: (12.4368359s)
helpers_test.go:252: TestNoKubernetes/serial/StartWithStopK8s logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| node    | add -p multinode-144300        | multinode-144300          | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:06 UTC |                     |
	| delete  | -p multinode-144300-m03        | multinode-144300-m03      | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:06 UTC | 06 Jul 23 21:07 UTC |
	| delete  | -p multinode-144300            | multinode-144300          | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:07 UTC | 06 Jul 23 21:08 UTC |
	| start   | -p test-preload-852700         | test-preload-852700       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:08 UTC | 06 Jul 23 21:10 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr              |                           |                   |         |                     |                     |
	|         | --wait=true --preload=false    |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.24.4   |                           |                   |         |                     |                     |
	| image   | test-preload-852700 image pull | test-preload-852700       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:10 UTC | 06 Jul 23 21:10 UTC |
	|         | gcr.io/k8s-minikube/busybox    |                           |                   |         |                     |                     |
	| stop    | -p test-preload-852700         | test-preload-852700       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:10 UTC | 06 Jul 23 21:10 UTC |
	| start   | -p test-preload-852700         | test-preload-852700       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:10 UTC | 06 Jul 23 21:12 UTC |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --wait=true --driver=hyperv    |                           |                   |         |                     |                     |
	| image   | test-preload-852700 image list | test-preload-852700       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:12 UTC | 06 Jul 23 21:12 UTC |
	| delete  | -p test-preload-852700         | test-preload-852700       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:12 UTC | 06 Jul 23 21:13 UTC |
	| start   | -p scheduled-stop-095800       | scheduled-stop-095800     | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:13 UTC | 06 Jul 23 21:14 UTC |
	|         | --memory=2048 --driver=hyperv  |                           |                   |         |                     |                     |
	| stop    | -p scheduled-stop-095800       | scheduled-stop-095800     | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:14 UTC | 06 Jul 23 21:14 UTC |
	|         | --schedule 5m                  |                           |                   |         |                     |                     |
	| ssh     | -p scheduled-stop-095800       | scheduled-stop-095800     | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:14 UTC | 06 Jul 23 21:14 UTC |
	|         | -- sudo systemctl show         |                           |                   |         |                     |                     |
	|         | minikube-scheduled-stop        |                           |                   |         |                     |                     |
	|         | --no-page                      |                           |                   |         |                     |                     |
	| stop    | -p scheduled-stop-095800       | scheduled-stop-095800     | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:14 UTC | 06 Jul 23 21:14 UTC |
	|         | --schedule 5s                  |                           |                   |         |                     |                     |
	| delete  | -p scheduled-stop-095800       | scheduled-stop-095800     | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:16 UTC | 06 Jul 23 21:16 UTC |
	| start   | -p force-systemd-flag-504800   | force-systemd-flag-504800 | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:16 UTC | 06 Jul 23 21:18 UTC |
	|         | --memory=2048 --force-systemd  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p NoKubernetes-504800         | NoKubernetes-504800       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:16 UTC |                     |
	|         | --no-kubernetes                |                           |                   |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p offline-docker-910700       | offline-docker-910700     | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:16 UTC | 06 Jul 23 21:19 UTC |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --memory=2048 --wait=true      |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p NoKubernetes-504800         | NoKubernetes-504800       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:16 UTC | 06 Jul 23 21:20 UTC |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| ssh     | force-systemd-flag-504800      | force-systemd-flag-504800 | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:18 UTC | 06 Jul 23 21:18 UTC |
	|         | ssh docker info --format       |                           |                   |         |                     |                     |
	|         | {{.CgroupDriver}}              |                           |                   |         |                     |                     |
	| delete  | -p force-systemd-flag-504800   | force-systemd-flag-504800 | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:18 UTC | 06 Jul 23 21:18 UTC |
	| start   | -p docker-flags-630100         | docker-flags-630100       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:18 UTC |                     |
	|         | --cache-images=false           |                           |                   |         |                     |                     |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --install-addons=false         |                           |                   |         |                     |                     |
	|         | --wait=false                   |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR           |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT           |                           |                   |         |                     |                     |
	|         | --docker-opt=debug             |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| delete  | -p offline-docker-910700       | offline-docker-910700     | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:19 UTC | 06 Jul 23 21:20 UTC |
	| start   | -p cert-expiration-861000      | cert-expiration-861000    | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:20 UTC |                     |
	|         | --memory=2048                  |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-715200      | running-upgrade-715200    | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:20 UTC |                     |
	|         | --memory=2200                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	| start   | -p NoKubernetes-504800         | NoKubernetes-504800       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:20 UTC |                     |
	|         | --no-kubernetes                |                           |                   |         |                     |                     |
	|         | --driver=hyperv                |                           |                   |         |                     |                     |
	|---------|--------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/06 21:20:58
	Running on machine: minikube6
	Binary: Built with gc go1.20.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0706 21:20:58.432381    4124 out.go:296] Setting OutFile to fd 1656 ...
	I0706 21:20:58.499732    4124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 21:20:58.499732    4124 out.go:309] Setting ErrFile to fd 1668...
	I0706 21:20:58.499732    4124 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 21:20:58.529408    4124 out.go:303] Setting JSON to false
	I0706 21:20:58.532527    4124 start.go:127] hostinfo: {"hostname":"minikube6","uptime":496595,"bootTime":1688181863,"procs":155,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3155 Build 19045.3155","kernelVersion":"10.0.19045.3155 Build 19045.3155","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0706 21:20:58.532734    4124 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 21:20:58.537166    4124 out.go:177] * [NoKubernetes-504800] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	I0706 21:20:58.541960    4124 notify.go:220] Checking for updates...
	I0706 21:20:58.545129    4124 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 21:20:58.546463    4124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 21:20:58.552638    4124 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0706 21:20:58.555207    4124 out.go:177]   - MINIKUBE_LOCATION=16832
	I0706 21:20:58.557730    4124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 21:20:58.561175    4124 config.go:182] Loaded profile config "NoKubernetes-504800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 21:20:58.561932    4124 start.go:1841] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0706 21:20:58.561932    4124 start.go:1762] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0706 21:20:58.561932    4124 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 21:21:00.109746    4124 out.go:177] * Using the hyperv driver based on existing profile
	I0706 21:21:00.113128    4124 start.go:297] selected driver: hyperv
	I0706 21:21:00.113128    4124 start.go:944] validating driver "hyperv" against &{Name:NoKubernetes-504800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-504800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.79.150 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 21:21:00.113128    4124 start.go:955] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 21:21:00.113128    4124 start.go:1841] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0706 21:21:00.163676    4124 cni.go:84] Creating CNI manager for ""
	I0706 21:21:00.163676    4124 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0706 21:21:00.163676    4124 start_flags.go:319] config:
	{Name:NoKubernetes-504800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-504800 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.79.150 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 21:21:00.163676    4124 start.go:1841] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0706 21:21:00.164792    4124 start.go:1841] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0706 21:21:00.164792    4124 iso.go:125] acquiring lock: {Name:mk7608081596fbafb2a8a8984d26768bdf174467 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:21:00.167497    4124 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-504800
	I0706 21:20:56.397787   10740 main.go:141] libmachine: [stdout =====>] : 172.29.64.149
	
	I0706 21:20:56.397866   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:20:56.402117   10740 main.go:141] libmachine: Using SSH client type: native
	I0706 21:20:56.402625   10740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.64.149 22 <nil> <nil>}
	I0706 21:20:56.402625   10740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdocker-flags-630100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 docker-flags-630100/g' /etc/hosts;
				else 
					echo '127.0.1.1 docker-flags-630100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0706 21:20:56.557773   10740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0706 21:20:56.557849   10740 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0706 21:20:56.557914   10740 buildroot.go:174] setting up certificates
	I0706 21:20:56.557914   10740 provision.go:83] configureAuth start
	I0706 21:20:56.557914   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-630100 ).state
	I0706 21:20:57.308125   10740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:20:57.308183   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:20:57.308269   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-630100 ).networkadapters[0]).ipaddresses[0]
	I0706 21:20:58.354621   10740 main.go:141] libmachine: [stdout =====>] : 172.29.64.149
	
	I0706 21:20:58.354766   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:20:58.354797   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-630100 ).state
	I0706 21:20:59.110402   10740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:20:59.110402   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:20:59.110676   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-630100 ).networkadapters[0]).ipaddresses[0]
	I0706 21:21:00.170637   10740 main.go:141] libmachine: [stdout =====>] : 172.29.64.149
	
	I0706 21:21:00.170859   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:00.170913   10740 provision.go:138] copyHostCerts
	I0706 21:21:00.170975   10740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0706 21:21:00.171247   10740 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0706 21:21:00.171365   10740 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0706 21:21:00.171961   10740 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0706 21:21:00.172801   10740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0706 21:21:00.172801   10740 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0706 21:21:00.173361   10740 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0706 21:21:00.173689   10740 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0706 21:21:00.174357   10740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0706 21:21:00.174357   10740 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0706 21:21:00.174357   10740 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0706 21:21:00.175186   10740 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0706 21:21:00.176623   10740 provision.go:112] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.docker-flags-630100 san=[172.29.64.149 172.29.64.149 localhost 127.0.0.1 minikube docker-flags-630100]
	I0706 21:21:00.620866   10740 provision.go:172] copyRemoteCerts
	I0706 21:21:00.629383   10740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0706 21:21:00.629383   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-630100 ).state
	I0706 21:21:00.172409    4124 preload.go:132] Checking if preload exists for k8s version v0.0.0 and runtime docker
	W0706 21:21:00.215403    4124 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0706 21:21:00.215661    4124 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\NoKubernetes-504800\config.json ...
	I0706 21:21:00.218378    4124 start.go:365] acquiring machines lock for NoKubernetes-504800: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 21:21:01.327415   10740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:21:01.327415   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:01.327415   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-630100 ).networkadapters[0]).ipaddresses[0]
	I0706 21:21:02.291591   10740 main.go:141] libmachine: [stdout =====>] : 172.29.64.149
	
	I0706 21:21:02.291661   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:02.291893   10740 sshutil.go:53] new ssh client: &{IP:172.29.64.149 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-630100\id_rsa Username:docker}
	I0706 21:21:02.396084   10740 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.7666881s)
	I0706 21:21:02.396225   10740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0706 21:21:02.396274   10740 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0706 21:21:02.439188   10740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0706 21:21:02.439335   10740 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0706 21:21:02.475676   10740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0706 21:21:02.475676   10740 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0706 21:21:02.509497   10740 provision.go:86] duration metric: configureAuth took 5.9515401s
	I0706 21:21:02.509497   10740 buildroot.go:189] setting minikube options for container-runtime
	I0706 21:21:02.510085   10740 config.go:182] Loaded profile config "docker-flags-630100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 21:21:02.510272   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-630100 ).state
	I0706 21:21:03.202360   10740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:21:03.202360   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:03.202450   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-630100 ).networkadapters[0]).ipaddresses[0]
	I0706 21:21:04.169953   10740 main.go:141] libmachine: [stdout =====>] : 172.29.64.149
	
	I0706 21:21:04.170046   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:04.173717   10740 main.go:141] libmachine: Using SSH client type: native
	I0706 21:21:04.174565   10740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.64.149 22 <nil> <nil>}
	I0706 21:21:04.174565   10740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0706 21:21:04.309506   10740 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0706 21:21:04.309506   10740 buildroot.go:70] root file system type: tmpfs
	I0706 21:21:04.309896   10740 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0706 21:21:04.309977   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-630100 ).state
	I0706 21:21:04.967142   10740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:21:04.967142   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:04.967142   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-630100 ).networkadapters[0]).ipaddresses[0]
	I0706 21:21:05.943925   10740 main.go:141] libmachine: [stdout =====>] : 172.29.64.149
	
	I0706 21:21:05.944178   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:05.947640   10740 main.go:141] libmachine: Using SSH client type: native
	I0706 21:21:05.948383   10740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.64.149 22 <nil> <nil>}
	I0706 21:21:05.948977   10740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="FOO=BAR"
	Environment="BAZ=BAT"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 --debug --icc=true 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0706 21:21:06.110023   10740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=FOO=BAR
	Environment=BAZ=BAT
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 --debug --icc=true 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0706 21:21:06.110023   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-630100 ).state
	I0706 21:21:06.781318   10740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:21:06.781476   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:06.781476   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-630100 ).networkadapters[0]).ipaddresses[0]
	I0706 21:21:07.749515   10740 main.go:141] libmachine: [stdout =====>] : 172.29.64.149
	
	I0706 21:21:07.749515   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:07.754844   10740 main.go:141] libmachine: Using SSH client type: native
	I0706 21:21:07.755570   10740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.64.149 22 <nil> <nil>}
	I0706 21:21:07.755570   10740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0706 21:21:08.814493   10740 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0706 21:21:08.814567   10740 machine.go:91] provisioned docker machine in 16.5975187s
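The provisioning step above ships the unit as `docker.service.new`, diffs it against the installed copy, and only moves it into place (then reloads and restarts Docker) when the content differs. A minimal sketch of that check-then-promote pattern with plain files (the `install_if_changed` helper and temp paths are illustrative, not minikube's code):

```shell
#!/bin/sh
# Check-then-promote: replace the target only when the candidate differs,
# mirroring the `diff -u ... || { mv ...; }` idiom in the log above.
install_if_changed() {
  target="$1"; candidate="$2"
  if diff -u "$target" "$candidate" >/dev/null 2>&1; then
    rm -f "$candidate"          # identical: discard the candidate
    echo "unchanged"
  else
    mv "$candidate" "$target"   # differs, or target missing: promote it
    echo "replaced"             # (the real flow also runs daemon-reload/restart here)
  fi
}

tmp=$(mktemp -d)
printf 'v1\n' > "$tmp/svc.new"
install_if_changed "$tmp/svc" "$tmp/svc.new"   # prints "replaced"
printf 'v1\n' > "$tmp/svc.new"
install_if_changed "$tmp/svc" "$tmp/svc.new"   # prints "unchanged"
```

Note that `diff` against a missing target fails the same way as a content mismatch, which is why the very first install (the "can't stat" case in the log) also takes the promote branch.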
	I0706 21:21:08.814629   10740 client.go:171] LocalClient.Create took 1m10.6857542s
	I0706 21:21:08.814629   10740 start.go:167] duration metric: libmachine.API.Create for "docker-flags-630100" took 1m10.6857542s
	I0706 21:21:08.814689   10740 start.go:300] post-start starting for "docker-flags-630100" (driver="hyperv")
	I0706 21:21:08.814689   10740 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0706 21:21:08.824205   10740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0706 21:21:08.824205   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-630100 ).state
	I0706 21:21:09.492513   10740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:21:09.492513   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:09.492513   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-630100 ).networkadapters[0]).ipaddresses[0]
	I0706 21:21:10.418584   10740 main.go:141] libmachine: [stdout =====>] : 172.29.64.149
	
	I0706 21:21:10.418894   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:10.418894   10740 sshutil.go:53] new ssh client: &{IP:172.29.64.149 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-630100\id_rsa Username:docker}
	I0706 21:21:10.528656   10740 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.704353s)
	I0706 21:21:10.538573   10740 ssh_runner.go:195] Run: cat /etc/os-release
	I0706 21:21:10.544728   10740 info.go:137] Remote host: Buildroot 2021.02.12
	I0706 21:21:10.544728   10740 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0706 21:21:10.544728   10740 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0706 21:21:10.546129   10740 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> 82562.pem in /etc/ssl/certs
	I0706 21:21:10.546129   10740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> /etc/ssl/certs/82562.pem
	I0706 21:21:10.556006   10740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0706 21:21:10.568858   10740 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /etc/ssl/certs/82562.pem (1708 bytes)
	I0706 21:21:10.594712   10740 start.go:303] post-start completed in 1.7800106s
	I0706 21:21:10.603313   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-630100 ).state
	I0706 21:21:13.976066   11192 start.go:369] acquired machines lock for "cert-expiration-861000" in 34.8382243s
	I0706 21:21:13.976066   11192 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-861000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:cert-expiration-861000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 21:21:13.976604   11192 start.go:125] createHost starting for "" (driver="hyperv")
	I0706 21:21:11.261331   10740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:21:11.261331   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:11.261331   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-630100 ).networkadapters[0]).ipaddresses[0]
	I0706 21:21:12.197661   10740 main.go:141] libmachine: [stdout =====>] : 172.29.64.149
	
	I0706 21:21:12.197661   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:12.198354   10740 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\docker-flags-630100\config.json ...
	I0706 21:21:12.201094   10740 start.go:128] duration metric: createHost completed in 1m14.0776748s
	I0706 21:21:12.201094   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-630100 ).state
	I0706 21:21:12.883434   10740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:21:12.883434   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:12.883434   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-630100 ).networkadapters[0]).ipaddresses[0]
	I0706 21:21:13.835626   10740 main.go:141] libmachine: [stdout =====>] : 172.29.64.149
	
	I0706 21:21:13.835626   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:13.839764   10740 main.go:141] libmachine: Using SSH client type: native
	I0706 21:21:13.840364   10740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.64.149 22 <nil> <nil>}
	I0706 21:21:13.840364   10740 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0706 21:21:13.975807   10740 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688678473.975737902
	
	I0706 21:21:13.975807   10740 fix.go:206] guest clock: 1688678473.975737902
	I0706 21:21:13.975807   10740 fix.go:219] Guest: 2023-07-06 21:21:13.975737902 +0000 UTC Remote: 2023-07-06 21:21:12.2010949 +0000 UTC m=+136.706901901 (delta=1.774643002s)
	I0706 21:21:13.975929   10740 fix.go:190] guest clock delta is within tolerance: 1.774643002s
	I0706 21:21:13.975973   10740 start.go:83] releasing machines lock for "docker-flags-630100", held for 1m15.853272s
	I0706 21:21:13.976066   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-630100 ).state
	I0706 21:21:14.661090   10740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:21:14.661090   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:14.661200   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-630100 ).networkadapters[0]).ipaddresses[0]
	I0706 21:21:13.982025   11192 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0706 21:21:13.982025   11192 start.go:159] libmachine.API.Create for "cert-expiration-861000" (driver="hyperv")
	I0706 21:21:13.982025   11192 client.go:168] LocalClient.Create starting
	I0706 21:21:13.983057   11192 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0706 21:21:13.983263   11192 main.go:141] libmachine: Decoding PEM data...
	I0706 21:21:13.983263   11192 main.go:141] libmachine: Parsing certificate...
	I0706 21:21:13.983263   11192 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0706 21:21:13.983263   11192 main.go:141] libmachine: Decoding PEM data...
	I0706 21:21:13.983263   11192 main.go:141] libmachine: Parsing certificate...
	I0706 21:21:13.983263   11192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0706 21:21:14.364557   11192 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0706 21:21:14.364557   11192 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:14.364638   11192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0706 21:21:14.941449   11192 main.go:141] libmachine: [stdout =====>] : False
	
	I0706 21:21:14.941536   11192 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:14.941536   11192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0706 21:21:15.400219   11192 main.go:141] libmachine: [stdout =====>] : True
	
	I0706 21:21:15.400219   11192 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:15.400219   11192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0706 21:21:16.899565   11192 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0706 21:21:16.899565   11192 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:16.901696   11192 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.30.1-1688144767-16765-amd64.iso...
	I0706 21:21:17.263443   11192 main.go:141] libmachine: Creating SSH key...
	I0706 21:21:15.646722   10740 main.go:141] libmachine: [stdout =====>] : 172.29.64.149
	
	I0706 21:21:15.646722   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:15.649860   10740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0706 21:21:15.650022   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-630100 ).state
	I0706 21:21:15.656214   10740 ssh_runner.go:195] Run: cat /version.json
	I0706 21:21:15.657748   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-630100 ).state
	I0706 21:21:16.364814   10740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:21:16.364960   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:16.365061   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-630100 ).networkadapters[0]).ipaddresses[0]
	I0706 21:21:16.412072   10740 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:21:16.412258   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:16.412258   10740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-630100 ).networkadapters[0]).ipaddresses[0]
	I0706 21:21:17.427055   10740 main.go:141] libmachine: [stdout =====>] : 172.29.64.149
	
	I0706 21:21:17.427299   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:17.427299   10740 sshutil.go:53] new ssh client: &{IP:172.29.64.149 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-630100\id_rsa Username:docker}
	I0706 21:21:17.474771   10740 main.go:141] libmachine: [stdout =====>] : 172.29.64.149
	
	I0706 21:21:17.474922   10740 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:17.475398   10740 sshutil.go:53] new ssh client: &{IP:172.29.64.149 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\docker-flags-630100\id_rsa Username:docker}
	I0706 21:21:17.599763   10740 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (1.949741s)
	I0706 21:21:17.599851   10740 ssh_runner.go:235] Completed: cat /version.json: (1.9436229s)
	I0706 21:21:17.609732   10740 ssh_runner.go:195] Run: systemctl --version
	I0706 21:21:17.628092   10740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0706 21:21:17.631519   10740 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0706 21:21:17.646118   10740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0706 21:21:17.670481   10740 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0706 21:21:17.670481   10740 start.go:466] detecting cgroup driver to use...
	I0706 21:21:17.670481   10740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 21:21:17.708013   10740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0706 21:21:17.740876   10740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0706 21:21:17.749486   10740 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0706 21:21:17.770215   10740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0706 21:21:17.801279   10740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 21:21:17.828457   10740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0706 21:21:17.859427   10740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 21:21:17.886495   10740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0706 21:21:17.912609   10740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0706 21:21:17.940277   10740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0706 21:21:17.968710   10740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0706 21:21:17.994429   10740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:21:18.146133   10740 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0706 21:21:18.171299   10740 start.go:466] detecting cgroup driver to use...
	I0706 21:21:18.181651   10740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0706 21:21:18.209122   10740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 21:21:18.234253   10740 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0706 21:21:18.267332   10740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 21:21:18.295597   10740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 21:21:18.323091   10740 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0706 21:21:18.375432   10740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 21:21:18.393753   10740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 21:21:18.431403   10740 ssh_runner.go:195] Run: which cri-dockerd
	I0706 21:21:18.446342   10740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0706 21:21:18.461176   10740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0706 21:21:18.499270   10740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0706 21:21:18.645406   10740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0706 21:21:18.789944   10740 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0706 21:21:18.789974   10740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0706 21:21:18.824013   10740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:21:18.989990   10740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 21:21:20.633873   10740 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.6431412s)
	I0706 21:21:20.647539   10740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 21:21:20.800526   10740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0706 21:21:20.959433   10740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 21:21:21.112086   10740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:21:21.285427   10740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0706 21:21:21.325457   10740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:21:21.484244   10740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0706 21:21:21.581983   10740 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0706 21:21:21.594466   10740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0706 21:21:21.602281   10740 start.go:534] Will wait 60s for crictl version
	I0706 21:21:21.610679   10740 ssh_runner.go:195] Run: which crictl
	I0706 21:21:21.624707   10740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0706 21:21:21.675301   10740 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0706 21:21:21.681713   10740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 21:21:21.723898   10740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 21:21:21.764517   10740 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0706 21:21:21.768000   10740 out.go:177]   - opt debug
	I0706 21:21:21.772085   10740 out.go:177]   - opt icc=true
	I0706 21:21:21.774567   10740 out.go:177]   - env FOO=BAR
	I0706 21:21:21.777538   10740 out.go:177]   - env BAZ=BAT
	I0706 21:21:17.416706   11192 main.go:141] libmachine: Creating VM...
	I0706 21:21:17.416706   11192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0706 21:21:18.886859   11192 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0706 21:21:18.886920   11192 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:18.887007   11192 main.go:141] libmachine: Using switch "Default Switch"
	I0706 21:21:18.887098   11192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0706 21:21:19.471289   11192 main.go:141] libmachine: [stdout =====>] : True
	
	I0706 21:21:19.471859   11192 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:19.471859   11192 main.go:141] libmachine: Creating VHD
	I0706 21:21:19.471859   11192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-861000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0706 21:21:21.181083   11192 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-861000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 05A5B5E8-9988-4638-BDB1-FF492C188C68
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0706 21:21:21.181315   11192 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:21:21.181315   11192 main.go:141] libmachine: Writing magic tar header
	I0706 21:21:21.181386   11192 main.go:141] libmachine: Writing SSH key tar header
	I0706 21:21:21.189845   11192 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-861000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\cert-expiration-861000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0706 21:21:21.779807   10740 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0706 21:21:21.784954   10740 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0706 21:21:21.785605   10740 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0706 21:21:21.785605   10740 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0706 21:21:21.785605   10740 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:76:79 Flags:up|broadcast|multicast|running}
	I0706 21:21:21.788364   10740 ip.go:210] interface addr: fe80::9492:57c6:5513:d3cc/64
	I0706 21:21:21.788364   10740 ip.go:210] interface addr: 172.29.64.1/20
	I0706 21:21:21.797023   10740 ssh_runner.go:195] Run: grep 172.29.64.1	host.minikube.internal$ /etc/hosts
	I0706 21:21:21.798662   10740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.64.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
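The `/etc/hosts` update above is idempotent: it strips any existing `host.minikube.internal` line, then appends the current gateway IP. The same pattern, sketched against a temporary file (the `update_hosts_entry` helper name is illustrative):

```shell
#!/bin/sh
# Idempotent hosts-entry update, as in the log's grep/echo/cp pipeline,
# but against a temp file rather than the real /etc/hosts.
tab=$(printf '\t')
update_hosts_entry() {
  ip="$1"; name="$2"; file="$3"
  # Drop any line whose hostname field matches, then append the fresh entry.
  { grep -v "${tab}${name}\$" "$file"; printf '%s\t%s\n' "$ip" "$name"; } > "$file.tmp"
  mv "$file.tmp" "$file"
}

hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
update_hosts_entry 172.29.64.1 host.minikube.internal "$hosts"
# Running it again leaves exactly one entry for the name.
update_hosts_entry 172.29.64.1 host.minikube.internal "$hosts"
```

Because the stale entry is filtered out before the append, re-provisioning a machine whose host gateway changed (as Hyper-V's Default Switch subnet often does across reboots) never accumulates duplicate lines.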
	I0706 21:21:21.823339   10740 localpath.go:92] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\client.crt -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\docker-flags-630100\client.crt
	I0706 21:21:21.824242   10740 localpath.go:117] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\client.key -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\docker-flags-630100\client.key
	I0706 21:21:21.825635   10740 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 21:21:21.833661   10740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0706 21:21:21.860486   10740 docker.go:636] Got preloaded images: 
	I0706 21:21:21.860486   10740 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0706 21:21:21.870296   10740 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0706 21:21:21.894909   10740 ssh_runner.go:195] Run: which lz4
	I0706 21:21:21.899536   10740 vm_assets.go:163] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0706 21:21:21.909854   10740 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0706 21:21:21.917265   10740 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0706 21:21:21.917342   10740 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (412285949 bytes)
	I0706 21:21:24.427712   10740 docker.go:600] Took 2.528158 seconds to copy over tarball
	I0706 21:21:24.438350   10740 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-07-06 21:19:27 UTC, ends at Thu 2023-07-06 21:21:31 UTC. --
	Jul 06 21:21:00 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:00.136306930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:21:00 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:00.269129573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:21:00 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:00.269311214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:21:00 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:00.269346522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:21:00 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:00.269899345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:21:00 NoKubernetes-504800 cri-dockerd[1194]: time="2023-07-06T21:21:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d4757fee8e6c804f9096200ca99bdee09ee814cfdd85634efea74bbaa574292c/resolv.conf as [nameserver 172.29.64.1]"
	Jul 06 21:21:00 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:00.433075363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:21:00 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:00.433209392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:21:00 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:00.433489855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:21:00 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:00.433535765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:21:00 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:00.466297677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:21:00 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:00.466619549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:21:00 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:00.466744777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:21:00 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:00.466882308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:21:00 NoKubernetes-504800 cri-dockerd[1194]: time="2023-07-06T21:21:00Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cd30e71d14a9a61007bfaf8aca287f5c4bba83d44b3ca14aed2d18d6d9879e2f/resolv.conf as [nameserver 172.29.64.1]"
	Jul 06 21:21:01 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:01.139512006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:21:01 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:01.140073829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:21:01 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:01.140183152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:21:01 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:01.140287475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:21:01 NoKubernetes-504800 cri-dockerd[1194]: time="2023-07-06T21:21:01Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8758b2d0c13c2bd7e4c72b0355a74e91f01a2d9a2f56c18b54619a172c17190a/resolv.conf as [nameserver 172.29.64.1]"
	Jul 06 21:21:01 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:01.518490452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:21:01 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:01.518635784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:21:01 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:01.519067378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:21:01 NoKubernetes-504800 dockerd[1303]: time="2023-07-06T21:21:01.519181703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:21:08 NoKubernetes-504800 cri-dockerd[1194]: time="2023-07-06T21:21:08Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9128ef1237a80       6e38f40d628db       31 seconds ago      Running             storage-provisioner       0                   8758b2d0c13c2
	168eb2d234e7e       ead0a4a53df89       31 seconds ago      Running             coredns                   0                   cd30e71d14a9a
	5fdaec52979c8       5780543258cf0       32 seconds ago      Running             kube-proxy                0                   d4757fee8e6c8
	452edc89136bd       41697ceeb70b3       54 seconds ago      Running             kube-scheduler            0                   761cc766f36d8
	36d7719b8a089       86b6af7dd652c       54 seconds ago      Running             etcd                      0                   dfd7c91ddf361
	3c8b143f9ed7c       7cffc01dba0e1       54 seconds ago      Running             kube-controller-manager   0                   0a2dba116d516
	30c1f58154092       08a0c939e61b7       55 seconds ago      Running             kube-apiserver            0                   0f00e6e29f591
	
	* 
	* ==> coredns [168eb2d234e7] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1008319858cd8849366c0f5555156ab5ce20cd98fedc211c6675234f8e435bfd28cd4ed3ec9afaafbad6dd8b85ab8681d4da6cc55eede0ec805bf7bd7719a5c3
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44561 - 40044 "HINFO IN 1824103211821218950.7384597343903997323. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072597227s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.000124] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.640144] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.560804] systemd-fstab-generator[113]: Ignoring "noauto" for root device
	[  +1.248671] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +7.927094] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +14.966743] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.140538] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[Jul 6 21:20] systemd-fstab-generator[917]: Ignoring "noauto" for root device
	[  +0.515824] systemd-fstab-generator[957]: Ignoring "noauto" for root device
	[  +0.162369] systemd-fstab-generator[968]: Ignoring "noauto" for root device
	[  +0.181035] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +1.308551] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.358072] systemd-fstab-generator[1139]: Ignoring "noauto" for root device
	[  +0.144372] systemd-fstab-generator[1150]: Ignoring "noauto" for root device
	[  +0.178497] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.146169] systemd-fstab-generator[1172]: Ignoring "noauto" for root device
	[  +0.174687] systemd-fstab-generator[1186]: Ignoring "noauto" for root device
	[  +6.872901] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[ +13.183502] kauditd_printk_skb: 29 callbacks suppressed
	[  +6.262172] systemd-fstab-generator[1612]: Ignoring "noauto" for root device
	[  +0.776333] kauditd_printk_skb: 29 callbacks suppressed
	[ +14.107912] systemd-fstab-generator[2639]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [36d7719b8a08] <==
	* {"level":"info","ts":"2023-07-06T21:21:26.622Z","caller":"traceutil/trace.go:171","msg":"trace[1715344787] linearizableReadLoop","detail":"{readStateIndex:413; appliedIndex:412; }","duration":"384.014995ms","start":"2023-07-06T21:21:26.238Z","end":"2023-07-06T21:21:26.622Z","steps":["trace[1715344787] 'read index received'  (duration: 26.614219ms)","trace[1715344787] 'applied index is now lower than readState.Index'  (duration: 357.399676ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-06T21:21:26.622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"384.197416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-06T21:21:26.622Z","caller":"traceutil/trace.go:171","msg":"trace[421937773] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:393; }","duration":"384.23582ms","start":"2023-07-06T21:21:26.238Z","end":"2023-07-06T21:21:26.622Z","steps":["trace[421937773] 'agreement among raft nodes before linearized reading'  (duration: 384.14941ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-06T21:21:26.622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"206.655414ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-07-06T21:21:26.622Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-06T21:21:26.237Z","time spent":"384.280526ms","remote":"127.0.0.1:57690","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2023-07-06T21:21:26.622Z","caller":"traceutil/trace.go:171","msg":"trace[1693453230] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:393; }","duration":"206.680916ms","start":"2023-07-06T21:21:26.415Z","end":"2023-07-06T21:21:26.622Z","steps":["trace[1693453230] 'agreement among raft nodes before linearized reading'  (duration: 206.62571ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-06T21:21:26.622Z","caller":"traceutil/trace.go:171","msg":"trace[81349825] transaction","detail":"{read_only:false; response_revision:393; number_of_response:1; }","duration":"636.265897ms","start":"2023-07-06T21:21:25.986Z","end":"2023-07-06T21:21:26.622Z","steps":["trace[81349825] 'process raft request'  (duration: 278.443571ms)","trace[81349825] 'compare'  (duration: 357.124244ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-06T21:21:26.622Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-06T21:21:25.986Z","time spent":"636.304901ms","remote":"127.0.0.1:57724","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":596,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:391 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:523 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-07-06T21:21:28.217Z","caller":"traceutil/trace.go:171","msg":"trace[489746543] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"161.243973ms","start":"2023-07-06T21:21:28.056Z","end":"2023-07-06T21:21:28.217Z","steps":["trace[489746543] 'process raft request'  (duration: 160.965941ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-06T21:21:28.827Z","caller":"traceutil/trace.go:171","msg":"trace[1662905188] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"194.4903ms","start":"2023-07-06T21:21:28.633Z","end":"2023-07-06T21:21:28.827Z","steps":["trace[1662905188] 'process raft request'  (duration: 194.31048ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-06T21:21:30.414Z","caller":"traceutil/trace.go:171","msg":"trace[1158975122] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"385.909114ms","start":"2023-07-06T21:21:30.028Z","end":"2023-07-06T21:21:30.414Z","steps":["trace[1158975122] 'process raft request'  (duration: 385.766198ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-06T21:21:30.414Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-07-06T21:21:30.028Z","time spent":"386.06083ms","remote":"127.0.0.1:57708","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":707,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-5d78c9869d-bgshh.176f641d148d7bb9\" mod_revision:388 > success:<request_put:<key:\"/registry/events/kube-system/coredns-5d78c9869d-bgshh.176f641d148d7bb9\" value_size:619 lease:1491123778340345925 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-5d78c9869d-bgshh.176f641d148d7bb9\" > >"}
	{"level":"warn","ts":"2023-07-06T21:21:30.966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"248.003015ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10714495815195122126 > lease_revoke:<id:14b1892d141d5995>","response":"size:28"}
	{"level":"info","ts":"2023-07-06T21:21:30.966Z","caller":"traceutil/trace.go:171","msg":"trace[108070541] linearizableReadLoop","detail":"{readStateIndex:417; appliedIndex:416; }","duration":"132.343803ms","start":"2023-07-06T21:21:30.834Z","end":"2023-07-06T21:21:30.966Z","steps":["trace[108070541] 'read index received'  (duration: 29.203µs)","trace[108070541] 'applied index is now lower than readState.Index'  (duration: 132.3131ms)"],"step_count":2}
	{"level":"warn","ts":"2023-07-06T21:21:30.966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.472816ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:611"}
	{"level":"info","ts":"2023-07-06T21:21:30.966Z","caller":"traceutil/trace.go:171","msg":"trace[235029641] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:396; }","duration":"132.523021ms","start":"2023-07-06T21:21:30.834Z","end":"2023-07-06T21:21:30.966Z","steps":["trace[235029641] 'agreement among raft nodes before linearized reading'  (duration: 132.405609ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-06T21:21:31.622Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.35678ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-06T21:21:31.622Z","caller":"traceutil/trace.go:171","msg":"trace[828194956] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:397; }","duration":"207.500796ms","start":"2023-07-06T21:21:31.415Z","end":"2023-07-06T21:21:31.622Z","steps":["trace[828194956] 'range keys from in-memory index tree'  (duration: 207.276372ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-06T21:21:33.161Z","caller":"traceutil/trace.go:171","msg":"trace[1734640195] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"126.812771ms","start":"2023-07-06T21:21:33.034Z","end":"2023-07-06T21:21:33.161Z","steps":["trace[1734640195] 'process raft request'  (duration: 126.598949ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-06T21:21:33.674Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"260.014384ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-06T21:21:33.675Z","caller":"traceutil/trace.go:171","msg":"trace[2111624784] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:398; }","duration":"260.122594ms","start":"2023-07-06T21:21:33.414Z","end":"2023-07-06T21:21:33.675Z","steps":["trace[2111624784] 'range keys from in-memory index tree'  (duration: 259.899272ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-06T21:21:33.867Z","caller":"traceutil/trace.go:171","msg":"trace[955576114] linearizableReadLoop","detail":"{readStateIndex:420; appliedIndex:419; }","duration":"190.56989ms","start":"2023-07-06T21:21:33.676Z","end":"2023-07-06T21:21:33.867Z","steps":["trace[955576114] 'read index received'  (duration: 190.253359ms)","trace[955576114] 'applied index is now lower than readState.Index'  (duration: 315.831µs)"],"step_count":2}
	{"level":"warn","ts":"2023-07-06T21:21:33.867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.75751ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-07-06T21:21:33.867Z","caller":"traceutil/trace.go:171","msg":"trace[1031807385] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:399; }","duration":"190.801414ms","start":"2023-07-06T21:21:33.676Z","end":"2023-07-06T21:21:33.867Z","steps":["trace[1031807385] 'agreement among raft nodes before linearized reading'  (duration: 190.673301ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-06T21:21:33.867Z","caller":"traceutil/trace.go:171","msg":"trace[1666172789] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"259.890272ms","start":"2023-07-06T21:21:33.607Z","end":"2023-07-06T21:21:33.867Z","steps":["trace[1666172789] 'process raft request'  (duration: 259.333315ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  21:21:36 up 2 min,  0 users,  load average: 1.52, 0.70, 0.27
	Linux NoKubernetes-504800 5.10.57 #1 SMP Fri Jun 30 21:41:53 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [30c1f5815409] <==
	* I0706 21:20:43.869260       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0706 21:20:45.028341       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0706 21:20:45.126760       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0706 21:20:45.304350       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0706 21:20:45.326193       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [172.29.79.150]
	I0706 21:20:45.327378       1 controller.go:624] quota admission added evaluator for: endpoints
	I0706 21:20:45.335753       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0706 21:20:45.948985       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0706 21:20:47.032604       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0706 21:20:47.077249       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0706 21:20:47.107116       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0706 21:20:59.247402       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0706 21:20:59.554376       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0706 21:21:25.976535       1 trace.go:219] Trace[887683167]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.29.79.150,type:*v1.Endpoints,resource:apiServerIPInfo (06-Jul-2023 21:21:25.298) (total time: 677ms):
	Trace[887683167]: ---"Transaction prepared" 123ms (21:21:25.422)
	Trace[887683167]: ---"Txn call completed" 553ms (21:21:25.976)
	Trace[887683167]: [677.974546ms] [677.974546ms] END
	I0706 21:21:26.623145       1 trace.go:219] Trace[703341778]: "Update" accept:application/json, */*,audit-id:7643c362-91c8-4fd7-bfb4-61d4e6e3d084,client:172.29.79.150,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (06-Jul-2023 21:21:25.984) (total time: 638ms):
	Trace[703341778]: ["GuaranteedUpdate etcd3" audit-id:7643c362-91c8-4fd7-bfb4-61d4e6e3d084,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 638ms (21:21:25.984)
	Trace[703341778]:  ---"Txn call completed" 637ms (21:21:26.622)]
	Trace[703341778]: [638.499762ms] [638.499762ms] END
	I0706 21:21:36.139063       1 trace.go:219] Trace[2018368864]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.29.79.150,type:*v1.Endpoints,resource:apiServerIPInfo (06-Jul-2023 21:21:35.573) (total time: 566ms):
	Trace[2018368864]: ---"Transaction prepared" 205ms (21:21:35.780)
	Trace[2018368864]: ---"Txn call completed" 358ms (21:21:36.138)
	Trace[2018368864]: [566.004134ms] [566.004134ms] END
	
	* 
	* ==> kube-controller-manager [3c8b143f9ed7] <==
	* I0706 21:20:59.197455       1 taint_manager.go:211] "Sending events to api server"
	I0706 21:20:59.198052       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0706 21:20:59.201091       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="nokubernetes-504800"
	I0706 21:20:59.201437       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0706 21:20:59.199938       1 shared_informer.go:318] Caches are synced for disruption
	I0706 21:20:59.200256       1 event.go:307] "Event occurred" object="nokubernetes-504800" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node nokubernetes-504800 event: Registered Node nokubernetes-504800 in Controller"
	I0706 21:20:59.203881       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0706 21:20:59.206909       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0706 21:20:59.207448       1 shared_informer.go:318] Caches are synced for node
	I0706 21:20:59.207511       1 range_allocator.go:174] "Sending events to api server"
	I0706 21:20:59.207595       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0706 21:20:59.207601       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0706 21:20:59.207666       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0706 21:20:59.223927       1 shared_informer.go:318] Caches are synced for resource quota
	I0706 21:20:59.238565       1 shared_informer.go:318] Caches are synced for daemon sets
	I0706 21:20:59.258056       1 shared_informer.go:318] Caches are synced for resource quota
	I0706 21:20:59.266692       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 1"
	I0706 21:20:59.272886       1 range_allocator.go:380] "Set node PodCIDR" node="nokubernetes-504800" podCIDRs=[10.244.0.0/24]
	I0706 21:20:59.293964       1 shared_informer.go:318] Caches are synced for service account
	I0706 21:20:59.328762       1 shared_informer.go:318] Caches are synced for namespace
	I0706 21:20:59.577661       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qsjx6"
	I0706 21:20:59.684576       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-bgshh"
	I0706 21:20:59.704129       1 shared_informer.go:318] Caches are synced for garbage collector
	I0706 21:20:59.704230       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0706 21:20:59.726631       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [5fdaec52979c] <==
	* I0706 21:21:00.913043       1 node.go:141] Successfully retrieved node IP: 172.29.79.150
	I0706 21:21:00.913191       1 server_others.go:110] "Detected node IP" address="172.29.79.150"
	I0706 21:21:00.913210       1 server_others.go:554] "Using iptables proxy"
	I0706 21:21:01.022240       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0706 21:21:01.022344       1 server_others.go:192] "Using iptables Proxier"
	I0706 21:21:01.022425       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0706 21:21:01.026265       1 server.go:658] "Version info" version="v1.27.3"
	I0706 21:21:01.026297       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 21:21:01.049541       1 config.go:315] "Starting node config controller"
	I0706 21:21:01.049601       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0706 21:21:01.060140       1 config.go:188] "Starting service config controller"
	I0706 21:21:01.060634       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0706 21:21:01.062581       1 config.go:97] "Starting endpoint slice config controller"
	I0706 21:21:01.062804       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0706 21:21:01.150495       1 shared_informer.go:318] Caches are synced for node config
	I0706 21:21:01.164354       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0706 21:21:01.164447       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [452edc89136b] <==
	* W0706 21:20:44.061062       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0706 21:20:44.061116       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0706 21:20:44.083634       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0706 21:20:44.083872       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0706 21:20:44.195429       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0706 21:20:44.195461       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0706 21:20:44.197210       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0706 21:20:44.197290       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0706 21:20:44.341096       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0706 21:20:44.341132       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0706 21:20:44.391366       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0706 21:20:44.391864       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0706 21:20:44.392062       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0706 21:20:44.392307       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0706 21:20:44.404958       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0706 21:20:44.405078       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0706 21:20:44.447576       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0706 21:20:44.447625       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0706 21:20:44.465147       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0706 21:20:44.465250       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0706 21:20:44.569728       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0706 21:20:44.569763       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0706 21:20:44.616707       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0706 21:20:44.616770       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0706 21:20:46.920104       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-07-06 21:19:27 UTC, ends at Thu 2023-07-06 21:21:37 UTC. --
	Jul 06 21:20:48 NoKubernetes-504800 kubelet[2674]: I0706 21:20:48.574285    2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-nokubernetes-504800" podStartSLOduration=1.574185456 podCreationTimestamp="2023-07-06 21:20:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-06 21:20:48.54976537 +0000 UTC m=+1.568361532" watchObservedRunningTime="2023-07-06 21:20:48.574185456 +0000 UTC m=+1.592781518"
	Jul 06 21:20:48 NoKubernetes-504800 kubelet[2674]: I0706 21:20:48.591237    2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-nokubernetes-504800" podStartSLOduration=4.591146725 podCreationTimestamp="2023-07-06 21:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-06 21:20:48.574739628 +0000 UTC m=+1.593335790" watchObservedRunningTime="2023-07-06 21:20:48.591146725 +0000 UTC m=+1.609742887"
	Jul 06 21:20:48 NoKubernetes-504800 kubelet[2674]: I0706 21:20:48.612754    2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-nokubernetes-504800" podStartSLOduration=1.612466447 podCreationTimestamp="2023-07-06 21:20:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-06 21:20:48.591532044 +0000 UTC m=+1.610128106" watchObservedRunningTime="2023-07-06 21:20:48.612466447 +0000 UTC m=+1.631062509"
	Jul 06 21:20:48 NoKubernetes-504800 kubelet[2674]: I0706 21:20:48.644712    2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-nokubernetes-504800" podStartSLOduration=1.644653046 podCreationTimestamp="2023-07-06 21:20:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-06 21:20:48.614507081 +0000 UTC m=+1.633103243" watchObservedRunningTime="2023-07-06 21:20:48.644653046 +0000 UTC m=+1.663249108"
	Jul 06 21:20:50 NoKubernetes-504800 kubelet[2674]: I0706 21:20:50.958174    2674 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jul 06 21:20:59 NoKubernetes-504800 kubelet[2674]: I0706 21:20:59.252410    2674 topology_manager.go:212] "Topology Admit Handler"
	Jul 06 21:20:59 NoKubernetes-504800 kubelet[2674]: I0706 21:20:59.432653    2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ea820f79-f5f0-4b47-947e-d4780a0b14f8-tmp\") pod \"storage-provisioner\" (UID: \"ea820f79-f5f0-4b47-947e-d4780a0b14f8\") " pod="kube-system/storage-provisioner"
	Jul 06 21:20:59 NoKubernetes-504800 kubelet[2674]: I0706 21:20:59.432870    2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvpfm\" (UniqueName: \"kubernetes.io/projected/ea820f79-f5f0-4b47-947e-d4780a0b14f8-kube-api-access-hvpfm\") pod \"storage-provisioner\" (UID: \"ea820f79-f5f0-4b47-947e-d4780a0b14f8\") " pod="kube-system/storage-provisioner"
	Jul 06 21:20:59 NoKubernetes-504800 kubelet[2674]: E0706 21:20:59.541892    2674 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 06 21:20:59 NoKubernetes-504800 kubelet[2674]: E0706 21:20:59.541927    2674 projected.go:198] Error preparing data for projected volume kube-api-access-hvpfm for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 06 21:20:59 NoKubernetes-504800 kubelet[2674]: E0706 21:20:59.541989    2674 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ea820f79-f5f0-4b47-947e-d4780a0b14f8-kube-api-access-hvpfm podName:ea820f79-f5f0-4b47-947e-d4780a0b14f8 nodeName:}" failed. No retries permitted until 2023-07-06 21:21:00.041969792 +0000 UTC m=+13.060565854 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hvpfm" (UniqueName: "kubernetes.io/projected/ea820f79-f5f0-4b47-947e-d4780a0b14f8-kube-api-access-hvpfm") pod "storage-provisioner" (UID: "ea820f79-f5f0-4b47-947e-d4780a0b14f8") : configmap "kube-root-ca.crt" not found
	Jul 06 21:20:59 NoKubernetes-504800 kubelet[2674]: I0706 21:20:59.594257    2674 topology_manager.go:212] "Topology Admit Handler"
	Jul 06 21:20:59 NoKubernetes-504800 kubelet[2674]: I0706 21:20:59.710542    2674 topology_manager.go:212] "Topology Admit Handler"
	Jul 06 21:20:59 NoKubernetes-504800 kubelet[2674]: I0706 21:20:59.734313    2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr5x8\" (UniqueName: \"kubernetes.io/projected/0e1ba3b6-45ec-452e-93d2-72e7a879d60d-kube-api-access-cr5x8\") pod \"kube-proxy-qsjx6\" (UID: \"0e1ba3b6-45ec-452e-93d2-72e7a879d60d\") " pod="kube-system/kube-proxy-qsjx6"
	Jul 06 21:20:59 NoKubernetes-504800 kubelet[2674]: I0706 21:20:59.734362    2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0e1ba3b6-45ec-452e-93d2-72e7a879d60d-kube-proxy\") pod \"kube-proxy-qsjx6\" (UID: \"0e1ba3b6-45ec-452e-93d2-72e7a879d60d\") " pod="kube-system/kube-proxy-qsjx6"
	Jul 06 21:20:59 NoKubernetes-504800 kubelet[2674]: I0706 21:20:59.734406    2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e1ba3b6-45ec-452e-93d2-72e7a879d60d-xtables-lock\") pod \"kube-proxy-qsjx6\" (UID: \"0e1ba3b6-45ec-452e-93d2-72e7a879d60d\") " pod="kube-system/kube-proxy-qsjx6"
	Jul 06 21:20:59 NoKubernetes-504800 kubelet[2674]: I0706 21:20:59.734430    2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e1ba3b6-45ec-452e-93d2-72e7a879d60d-lib-modules\") pod \"kube-proxy-qsjx6\" (UID: \"0e1ba3b6-45ec-452e-93d2-72e7a879d60d\") " pod="kube-system/kube-proxy-qsjx6"
	Jul 06 21:20:59 NoKubernetes-504800 kubelet[2674]: I0706 21:20:59.835401    2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3222889a-9a5f-420e-bb7c-611bf4641216-config-volume\") pod \"coredns-5d78c9869d-bgshh\" (UID: \"3222889a-9a5f-420e-bb7c-611bf4641216\") " pod="kube-system/coredns-5d78c9869d-bgshh"
	Jul 06 21:20:59 NoKubernetes-504800 kubelet[2674]: I0706 21:20:59.835645    2674 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtv2b\" (UniqueName: \"kubernetes.io/projected/3222889a-9a5f-420e-bb7c-611bf4641216-kube-api-access-rtv2b\") pod \"coredns-5d78c9869d-bgshh\" (UID: \"3222889a-9a5f-420e-bb7c-611bf4641216\") " pod="kube-system/coredns-5d78c9869d-bgshh"
	Jul 06 21:21:01 NoKubernetes-504800 kubelet[2674]: I0706 21:21:01.314934    2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8758b2d0c13c2bd7e4c72b0355a74e91f01a2d9a2f56c18b54619a172c17190a"
	Jul 06 21:21:01 NoKubernetes-504800 kubelet[2674]: I0706 21:21:01.335636    2674 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd30e71d14a9a61007bfaf8aca287f5c4bba83d44b3ca14aed2d18d6d9879e2f"
	Jul 06 21:21:02 NoKubernetes-504800 kubelet[2674]: I0706 21:21:02.377438    2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-bgshh" podStartSLOduration=3.37739614 podCreationTimestamp="2023-07-06 21:20:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-06 21:21:01.370361766 +0000 UTC m=+14.388957928" watchObservedRunningTime="2023-07-06 21:21:02.37739614 +0000 UTC m=+15.395992202"
	Jul 06 21:21:02 NoKubernetes-504800 kubelet[2674]: I0706 21:21:02.417798    2674 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.41776398 podCreationTimestamp="2023-07-06 21:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-06 21:21:02.377986264 +0000 UTC m=+15.396582326" watchObservedRunningTime="2023-07-06 21:21:02.41776398 +0000 UTC m=+15.436360042"
	Jul 06 21:21:08 NoKubernetes-504800 kubelet[2674]: I0706 21:21:08.433935    2674 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 06 21:21:08 NoKubernetes-504800 kubelet[2674]: I0706 21:21:08.434626    2674 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	
	* 
	* ==> storage-provisioner [9128ef1237a8] <==
	* I0706 21:21:01.611264       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0706 21:21:01.628741       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0706 21:21:01.628984       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0706 21:21:01.640642       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0706 21:21:01.641383       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b1537ae7-f578-466a-9840-6dc2c51c7ba4", APIVersion:"v1", ResourceVersion:"364", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' NoKubernetes-504800_2118d367-0e8d-47a0-8015-4235f711c71a became leader
	I0706 21:21:01.641564       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_NoKubernetes-504800_2118d367-0e8d-47a0-8015-4235f711c71a!
	I0706 21:21:01.743054       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_NoKubernetes-504800_2118d367-0e8d-47a0-8015-4235f711c71a!
	
	

-- /stdout --
** stderr ** 
	E0706 21:21:33.699881   10132 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v0.0.0/kubectl: command not found
	 output: "\n** stderr ** \nsudo: /var/lib/minikube/binaries/v0.0.0/kubectl: command not found\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p NoKubernetes-504800 -n NoKubernetes-504800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p NoKubernetes-504800 -n NoKubernetes-504800: (4.8839853s)
helpers_test.go:261: (dbg) Run:  kubectl --context NoKubernetes-504800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestNoKubernetes/serial/StartWithStopK8s FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (44.27s)

TestStoppedBinaryUpgrade/Upgrade (357.68s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.6.2.1313778469.exe start -p stopped-upgrade-322600 --memory=2200 --vm-driver=hyperv
E0706 21:26:14.468449    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
version_upgrade_test.go:195: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.6.2.1313778469.exe start -p stopped-upgrade-322600 --memory=2200 --vm-driver=hyperv: (2m58.2265956s)
version_upgrade_test.go:204: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.6.2.1313778469.exe -p stopped-upgrade-322600 stop
version_upgrade_test.go:204: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.6.2.1313778469.exe -p stopped-upgrade-322600 stop: (20.2463379s)
version_upgrade_test.go:210: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-322600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:210: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p stopped-upgrade-322600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (2m39.076363s)

-- stdout --
	* [stopped-upgrade-322600] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the hyperv driver based on existing profile
	* Starting control plane node stopped-upgrade-322600 in cluster stopped-upgrade-322600
	* Restarting existing hyperv VM for "stopped-upgrade-322600" ...
	
	

-- /stdout --
** stderr ** 
	I0706 21:29:05.104064   10604 out.go:296] Setting OutFile to fd 1928 ...
	I0706 21:29:05.168518   10604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 21:29:05.168518   10604 out.go:309] Setting ErrFile to fd 1564...
	I0706 21:29:05.168518   10604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 21:29:05.191620   10604 out.go:303] Setting JSON to false
	I0706 21:29:05.194689   10604 start.go:127] hostinfo: {"hostname":"minikube6","uptime":497082,"bootTime":1688181863,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3155 Build 19045.3155","kernelVersion":"10.0.19045.3155 Build 19045.3155","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0706 21:29:05.194689   10604 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 21:29:05.431222   10604 out.go:177] * [stopped-upgrade-322600] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	I0706 21:29:05.478726   10604 notify.go:220] Checking for updates...
	I0706 21:29:05.624993   10604 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 21:29:05.774693   10604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 21:29:05.973049   10604 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0706 21:29:06.174680   10604 out.go:177]   - MINIKUBE_LOCATION=16832
	I0706 21:29:06.310110   10604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 21:29:06.372910   10604 config.go:182] Loaded profile config "stopped-upgrade-322600": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0706 21:29:06.372910   10604 start_flags.go:683] config upgrade: Driver=hyperv
	I0706 21:29:06.372910   10604 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953
	I0706 21:29:06.372910   10604 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\stopped-upgrade-322600\config.json ...
	I0706 21:29:06.517537   10604 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0706 21:29:06.578909   10604 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 21:29:08.418503   10604 out.go:177] * Using the hyperv driver based on existing profile
	I0706 21:29:08.530759   10604 start.go:297] selected driver: hyperv
	I0706 21:29:08.530759   10604 start.go:944] validating driver "hyperv" against &{Name:stopped-upgrade-322600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.29.71.148 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 21:29:08.530759   10604 start.go:955] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 21:29:08.580478   10604 cni.go:84] Creating CNI manager for ""
	I0706 21:29:08.580478   10604 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0706 21:29:08.580478   10604 start_flags.go:319] config:
	{Name:stopped-upgrade-322600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.29.71.148 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 21:29:08.581258   10604 iso.go:125] acquiring lock: {Name:mk7608081596fbafb2a8a8984d26768bdf174467 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:29:08.702546   10604 out.go:177] * Starting control plane node stopped-upgrade-322600 in cluster stopped-upgrade-322600
	I0706 21:29:08.775625   10604 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	W0706 21:29:08.830590   10604 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0706 21:29:08.839629   10604 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\stopped-upgrade-322600\config.json ...
	I0706 21:29:08.839629   10604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I0706 21:29:08.839629   10604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0
	I0706 21:29:08.839629   10604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0
	I0706 21:29:08.839629   10604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.4.3-0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0
	I0706 21:29:08.839629   10604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0
	I0706 21:29:08.839629   10604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0
	I0706 21:29:08.839629   10604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.1 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1
	I0706 21:29:08.839629   10604 localpath.go:146] windows sanitize: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns:1.6.5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5
	I0706 21:29:08.843304   10604 start.go:365] acquiring machines lock for stopped-upgrade-322600: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 21:29:09.093002   10604 cache.go:107] acquiring lock: {Name:mk05ded54209e4a708242c69b9964ff468668659 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:29:09.093379   10604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 exists
	I0706 21:29:09.093594   10604 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns_1.6.5" took 253.3769ms
	I0706 21:29:09.093686   10604 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns_1.6.5 succeeded
	I0706 21:29:09.094622   10604 cache.go:107] acquiring lock: {Name:mk973f8dac8863f6898376a57143f99d2ac9f288 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:29:09.094622   10604 cache.go:107] acquiring lock: {Name:mk954426991f5fcd2ab5db06d1d0131a8f1b324a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:29:09.094757   10604 cache.go:107] acquiring lock: {Name:mk99c024335b2b0df925d0a4f8be63420005fb4e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:29:09.094875   10604 cache.go:107] acquiring lock: {Name:mk9be06f7dda6cd9f88a49bfda7b93c646500d50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:29:09.094875   10604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 exists
	I0706 21:29:09.094875   10604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 exists
	I0706 21:29:09.094875   10604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 exists
	I0706 21:29:09.094875   10604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 exists
	I0706 21:29:09.094875   10604 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.4.3-0" took 255.2449ms
	I0706 21:29:09.094875   10604 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.17.0" took 254.6584ms
	I0706 21:29:09.094875   10604 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.4.3-0 succeeded
	I0706 21:29:09.094875   10604 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.17.0" took 254.6584ms
	I0706 21:29:09.094875   10604 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.17.0 succeeded
	I0706 21:29:09.094875   10604 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.17.0 succeeded
	I0706 21:29:09.094875   10604 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.17.0" took 255.2449ms
	I0706 21:29:09.094875   10604 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.17.0 succeeded
	I0706 21:29:09.106231   10604 cache.go:107] acquiring lock: {Name:mkf546fa1bb9082500c9826611ed614659cd3a1d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:29:09.106231   10604 cache.go:107] acquiring lock: {Name:mkc4fee9499fca2b7d7c08251407e6b74559e928 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:29:09.106231   10604 cache.go:107] acquiring lock: {Name:mke18763d5cf9d9bc5d35bbd720515199764b657 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:29:09.106440   10604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I0706 21:29:09.106523   10604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 exists
	I0706 21:29:09.106523   10604 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 exists
	I0706 21:29:09.106627   10604 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 266.9962ms
	I0706 21:29:09.106842   10604 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I0706 21:29:09.106627   10604 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.1" took 266.4097ms
	I0706 21:29:09.106842   10604 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.1 succeeded
	I0706 21:29:09.106842   10604 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.17.0" took 267.2113ms
	I0706 21:29:09.106842   10604 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.17.0 succeeded
	I0706 21:29:09.106842   10604 cache.go:87] Successfully saved all images to host disk.
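The cache phase above checks, per image, whether a tar already exists under `.minikube/cache/images/<arch>/` (logged as `cache.go:115 ... exists`) and only saves on a miss. A minimal sketch of that existence check — the path mapping and `cache_dir` layout here are inferred from the log, not minikube source:

```shell
# Sketch (inferred from the log, not minikube code) of the per-image cache
# check: an image ref maps to a tar path by replacing ':' with '_'; if the
# file exists the save is skipped, otherwise the image would be saved.
cache_dir="$(mktemp -d)"   # stands in for $MINIKUBE_HOME/cache/images/amd64
image="registry.k8s.io/pause:3.1"
tar_path="$cache_dir/$(echo "$image" | sed 's/:/_/')"
if [ -f "$tar_path" ]; then
  echo "cache: $tar_path exists"
else
  echo "cache miss: would save $image to $tar_path"
fi
```

With a fresh temp directory this always takes the miss branch; in the run above all eight images were already cached, so every check logged `exists`.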
	I0706 21:30:31.595614   10604 start.go:369] acquired machines lock for "stopped-upgrade-322600" in 1m22.7517147s
	I0706 21:30:31.595614   10604 start.go:96] Skipping create...Using existing machine configuration
	I0706 21:30:31.595614   10604 fix.go:54] fixHost starting: minikube
	I0706 21:30:31.596836   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:30:32.317961   10604 main.go:141] libmachine: [stdout =====>] : Off
	
	I0706 21:30:32.318040   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:32.318040   10604 fix.go:102] recreateIfNeeded on stopped-upgrade-322600: state=Stopped err=<nil>
	W0706 21:30:32.318113   10604 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 21:30:32.321503   10604 out.go:177] * Restarting existing hyperv VM for "stopped-upgrade-322600" ...
	I0706 21:30:32.326355   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM stopped-upgrade-322600
	I0706 21:30:33.966160   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:30:33.966160   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:33.966160   10604 main.go:141] libmachine: Waiting for host to start...
	I0706 21:30:33.966160   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:30:34.738599   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:30:34.738599   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:34.738599   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:30:35.760186   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:30:35.760186   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:36.763138   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:30:37.456600   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:30:37.456631   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:37.456888   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:30:38.441606   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:30:38.441606   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:39.451583   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:30:40.236416   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:30:40.236416   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:40.236416   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:30:41.332692   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:30:41.332692   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:42.337562   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:30:43.098305   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:30:43.098602   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:43.098602   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:30:44.111241   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:30:44.111241   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:45.125725   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:30:45.856583   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:30:45.856583   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:45.856662   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:30:46.872875   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:30:46.872875   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:47.880227   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:30:48.608307   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:30:48.608307   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:48.608307   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:30:49.592146   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:30:49.592146   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:50.607756   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:30:51.344937   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:30:51.344975   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:51.345011   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:30:52.356564   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:30:52.356729   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:53.372084   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:30:54.112310   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:30:54.112310   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:54.112310   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:30:55.092988   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:30:55.092988   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:56.093290   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:30:56.847624   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:30:56.847624   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:56.847789   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:30:57.854308   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:30:57.854458   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:58.855147   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:30:59.569622   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:30:59.569754   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:30:59.569827   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:00.552749   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:31:00.552923   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:01.565315   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:02.290851   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:02.290851   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:02.291390   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:03.274073   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:31:03.274240   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:04.285621   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:05.016088   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:05.016088   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:05.016161   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:06.027154   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:31:06.027154   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:07.034330   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:07.760102   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:07.760102   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:07.760102   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:08.781998   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:31:08.781998   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:09.795697   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:10.520610   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:10.520610   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:10.520610   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:11.516314   10604 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:31:11.516314   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:12.517042   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:13.256468   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:13.256732   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:13.256802   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:14.369933   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:14.369933   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:14.372855   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:15.144141   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:15.144141   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:15.144141   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:16.315678   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:16.315859   10604 main.go:141] libmachine: [stderr =====>] : 
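The repeated `Get-VM ... .state` / `ipaddresses[0]` pairs above (21:30:33 through 21:31:14) are a poll-until-IP loop: the VM reports `Running` almost immediately, but the adapter returns an empty address for ~40 seconds until the guest's DHCP lease lands. A minimal sketch of the same loop, with the PowerShell query replaced by a stub:

```shell
# Sketch (stubbed, not libmachine code) of the poll-until-IP loop in the log:
# query the first adapter's first IP; while it comes back empty, retry.
vm="stopped-upgrade-322600"
polls=0
ip=""
while [ -z "$ip" ]; do
  polls=$((polls + 1))
  # Real driver runs, roughly once per second:
  #   powershell.exe -NoProfile -NonInteractive \
  #     "(( Hyper-V\Get-VM $vm ).networkadapters[0]).ipaddresses[0]"
  # Stub: the adapter reports nothing for the first three polls.
  if [ "$polls" -gt 3 ]; then ip="172.29.71.148"; fi
done
echo "VM $vm reachable at $ip after $polls polls"
```

In the log, each empty poll shows as a `[stdout =====>] :` line with no address, followed by a retry about a second later.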
	I0706 21:31:16.315859   10604 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\stopped-upgrade-322600\config.json ...
	I0706 21:31:16.318327   10604 machine.go:88] provisioning docker machine ...
	I0706 21:31:16.318327   10604 buildroot.go:166] provisioning hostname "stopped-upgrade-322600"
	I0706 21:31:16.318327   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:17.057262   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:17.057262   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:17.057262   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:18.106935   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:18.106935   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:18.111467   10604 main.go:141] libmachine: Using SSH client type: native
	I0706 21:31:18.112237   10604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.71.148 22 <nil> <nil>}
	I0706 21:31:18.112237   10604 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-322600 && echo "stopped-upgrade-322600" | sudo tee /etc/hostname
	I0706 21:31:18.247925   10604 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-322600
	
	I0706 21:31:18.247983   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:18.972837   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:18.972837   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:18.972837   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:20.001549   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:20.001549   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:20.005372   10604 main.go:141] libmachine: Using SSH client type: native
	I0706 21:31:20.006350   10604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.71.148 22 <nil> <nil>}
	I0706 21:31:20.006415   10604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-322600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-322600/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-322600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0706 21:31:20.137268   10604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0706 21:31:20.137268   10604 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0706 21:31:20.137268   10604 buildroot.go:174] setting up certificates
	I0706 21:31:20.137268   10604 provision.go:83] configureAuth start
	I0706 21:31:20.137268   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:20.872023   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:20.872023   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:20.872023   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:21.960724   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:21.960724   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:21.960724   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:22.691060   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:22.691126   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:22.691126   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:23.754609   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:23.754666   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:23.754666   10604 provision.go:138] copyHostCerts
	I0706 21:31:23.754666   10604 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0706 21:31:23.754666   10604 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0706 21:31:23.755547   10604 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0706 21:31:23.756519   10604 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0706 21:31:23.756519   10604 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0706 21:31:23.756519   10604 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0706 21:31:23.758029   10604 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0706 21:31:23.758103   10604 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0706 21:31:23.758414   10604 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0706 21:31:23.759605   10604 provision.go:112] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.stopped-upgrade-322600 san=[172.29.71.148 172.29.71.148 localhost 127.0.0.1 minikube stopped-upgrade-322600]
	I0706 21:31:24.098611   10604 provision.go:172] copyRemoteCerts
	I0706 21:31:24.107637   10604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0706 21:31:24.107637   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:24.872743   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:24.872855   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:24.872966   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:25.977724   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:25.977842   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:25.977934   10604 sshutil.go:53] new ssh client: &{IP:172.29.71.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\stopped-upgrade-322600\id_rsa Username:docker}
	I0706 21:31:26.079994   10604 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.972342s)
	I0706 21:31:26.080390   10604 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0706 21:31:26.098965   10604 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0706 21:31:26.116099   10604 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0706 21:31:26.134097   10604 provision.go:86] duration metric: configureAuth took 5.9967165s
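The `configureAuth` phase above (provision.go:112) generates a server certificate signed by the local minikube CA, with the VM IP and hostnames as SANs, then copies it to `/etc/docker/`. An equivalent sketch using plain `openssl` — minikube does this in Go, so the commands here are a stand-in, not what actually ran:

```shell
# Sketch (plain openssl, standing in for minikube's Go cert code) of the
# "generating server cert" step: sign a server cert with the local CA,
# putting the VM IP and hostnames from the log into subjectAltName.
workdir="$(mktemp -d)" && cd "$workdir"
openssl genrsa -out ca-key.pem 2048 2>/dev/null
openssl req -x509 -new -key ca-key.pem -subj "/CN=minikubeCA" -days 1 -out ca.pem
openssl genrsa -out server-key.pem 2048 2>/dev/null
openssl req -new -key server-key.pem \
  -subj "/O=jenkins.stopped-upgrade-322600" -out server.csr
printf 'subjectAltName=IP:172.29.71.148,DNS:localhost,DNS:minikube,DNS:stopped-upgrade-322600\n' > san.cnf
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -days 1 -extfile san.cnf -out server.pem 2>/dev/null
result="$(openssl verify -CAfile ca.pem server.pem)"
echo "$result"
```

The resulting `server.pem`/`server-key.pem` pair is what the log then scps to `/etc/docker/` for dockerd's `--tlsverify` listener on port 2376.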
	I0706 21:31:26.134097   10604 buildroot.go:189] setting minikube options for container-runtime
	I0706 21:31:26.134097   10604 config.go:182] Loaded profile config "stopped-upgrade-322600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0706 21:31:26.134633   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:26.881993   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:26.881993   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:26.882064   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:27.959207   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:27.959269   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:27.964090   10604 main.go:141] libmachine: Using SSH client type: native
	I0706 21:31:27.964895   10604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.71.148 22 <nil> <nil>}
	I0706 21:31:27.964895   10604 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0706 21:31:28.091357   10604 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0706 21:31:28.091357   10604 buildroot.go:70] root file system type: tmpfs
	I0706 21:31:28.091890   10604 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0706 21:31:28.092080   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:28.815550   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:28.815609   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:28.815609   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:29.809736   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:29.809736   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:29.813417   10604 main.go:141] libmachine: Using SSH client type: native
	I0706 21:31:29.814363   10604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.71.148 22 <nil> <nil>}
	I0706 21:31:29.814363   10604 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0706 21:31:29.948176   10604 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
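The drop-in comments above describe systemd's ExecStart override rule: a bare `ExecStart=` clears the command inherited from the base unit so that exactly one ExecStart remains for a non-oneshot service. A minimal sketch of that pattern, using a hypothetical /tmp path instead of a real drop-in directory:

```shell
# Hypothetical drop-in illustrating the ExecStart-reset pattern described
# above: the empty ExecStart= clears the inherited command, and the second
# line supplies the single replacement command systemd will accept.
mkdir -p /tmp/docker.service.d
cat > /tmp/docker.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# Two ExecStart lines total: one reset, one command.
grep -c '^ExecStart=' /tmp/docker.service.d/override.conf
```

On a real host the file would live under /etc/systemd/system/docker.service.d/ and take effect only after `systemctl daemon-reload`.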
	
	I0706 21:31:29.948176   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:30.629411   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:30.629411   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:30.629506   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:31.644490   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:31.644490   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:31.648279   10604 main.go:141] libmachine: Using SSH client type: native
	I0706 21:31:31.649115   10604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.71.148 22 <nil> <nil>}
	I0706 21:31:31.649115   10604 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0706 21:31:32.856722   10604 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
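The SSH command above uses a diff-then-replace idiom: the new unit is installed, and the service reloaded, only when it differs from what is already on disk (a missing old file, as in the `can't stat` output here, also takes the replace branch). A sketch with hypothetical /tmp files:

```shell
# Sketch of the diff-then-replace idiom in the SSH command above:
# replace the target only when the new file differs, so an unchanged
# config would skip the daemon-reload/restart side effects.
printf 'old\n' > /tmp/docker.service
printf 'new\n' > /tmp/docker.service.new
diff -u /tmp/docker.service /tmp/docker.service.new >/dev/null || \
  mv /tmp/docker.service.new /tmp/docker.service
cat /tmp/docker.service   # prints "new": the files differed, so mv ran
```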
	
	I0706 21:31:32.856789   10604 machine.go:91] provisioned docker machine in 16.5383418s
	I0706 21:31:32.856789   10604 start.go:300] post-start starting for "stopped-upgrade-322600" (driver="hyperv")
	I0706 21:31:32.856853   10604 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0706 21:31:32.866895   10604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0706 21:31:32.866895   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:33.563648   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:33.563648   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:33.563648   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:34.612337   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:34.612337   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:34.612478   10604 sshutil.go:53] new ssh client: &{IP:172.29.71.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\stopped-upgrade-322600\id_rsa Username:docker}
	I0706 21:31:34.701190   10604 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (1.8342816s)
	I0706 21:31:34.711263   10604 ssh_runner.go:195] Run: cat /etc/os-release
	I0706 21:31:34.718031   10604 info.go:137] Remote host: Buildroot 2019.02.7
	I0706 21:31:34.718031   10604 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0706 21:31:34.718585   10604 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0706 21:31:34.720342   10604 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> 82562.pem in /etc/ssl/certs
	I0706 21:31:34.731337   10604 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0706 21:31:34.740652   10604 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /etc/ssl/certs/82562.pem (1708 bytes)
	I0706 21:31:34.761968   10604 start.go:303] post-start completed in 1.9050126s
	I0706 21:31:34.761968   10604 fix.go:56] fixHost completed within 1m3.1658964s
	I0706 21:31:34.762059   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:35.495711   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:35.495711   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:35.495860   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:36.539913   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:36.540094   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:36.544531   10604 main.go:141] libmachine: Using SSH client type: native
	I0706 21:31:36.545904   10604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.71.148 22 <nil> <nil>}
	I0706 21:31:36.545904   10604 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0706 21:31:36.672202   10604 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688679096.667623660
	
	I0706 21:31:36.672202   10604 fix.go:206] guest clock: 1688679096.667623660
	I0706 21:31:36.672202   10604 fix.go:219] Guest: 2023-07-06 21:31:36.66762366 +0000 UTC Remote: 2023-07-06 21:31:34.7620595 +0000 UTC m=+149.735082701 (delta=1.90556416s)
	I0706 21:31:36.672280   10604 fix.go:190] guest clock delta is within tolerance: 1.90556416s
	I0706 21:31:36.672280   10604 start.go:83] releasing machines lock for "stopped-upgrade-322600", held for 1m5.0761938s
	I0706 21:31:36.672563   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:37.434911   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:37.434911   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:37.435016   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:38.497103   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:38.497103   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:38.501339   10604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0706 21:31:38.501467   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:38.511675   10604 ssh_runner.go:195] Run: cat /version.json
	I0706 21:31:38.511675   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM stopped-upgrade-322600 ).state
	I0706 21:31:39.253703   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:39.253981   10604 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:31:39.253981   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:39.253981   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:39.254076   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:39.254107   10604 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM stopped-upgrade-322600 ).networkadapters[0]).ipaddresses[0]
	I0706 21:31:40.397485   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:40.397485   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:40.397485   10604 sshutil.go:53] new ssh client: &{IP:172.29.71.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\stopped-upgrade-322600\id_rsa Username:docker}
	I0706 21:31:40.459971   10604 main.go:141] libmachine: [stdout =====>] : 172.29.71.148
	
	I0706 21:31:40.460036   10604 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:31:40.460036   10604 sshutil.go:53] new ssh client: &{IP:172.29.71.148 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\stopped-upgrade-322600\id_rsa Username:docker}
	I0706 21:31:40.497809   10604 ssh_runner.go:235] Completed: cat /version.json: (1.9861202s)
	W0706 21:31:40.498807   10604 start.go:483] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0706 21:31:40.506847   10604 ssh_runner.go:195] Run: systemctl --version
	I0706 21:31:40.617862   10604 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.1164198s)
	I0706 21:31:40.628234   10604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0706 21:31:40.635522   10604 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0706 21:31:40.646042   10604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0706 21:31:40.663019   10604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0706 21:31:40.671151   10604 cni.go:311] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0706 21:31:40.671151   10604 start.go:466] detecting cgroup driver to use...
	I0706 21:31:40.671520   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 21:31:40.699635   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0706 21:31:40.716259   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0706 21:31:40.723680   10604 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0706 21:31:40.732836   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0706 21:31:40.750207   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 21:31:40.766851   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0706 21:31:40.784865   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 21:31:40.803622   10604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0706 21:31:40.820405   10604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
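The sed runs above rewrite /etc/containerd/config.toml in place to select the "cgroupfs" driver. One of those rewrites, sketched on a hypothetical config fragment (GNU sed, as on the guest):

```shell
# Sketch of one sed rewrite from the sequence above: force
# SystemdCgroup = false while the captured group \1 preserves the
# line's original indentation.
printf '%s\n' '    SystemdCgroup = true' > /tmp/config.toml
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml
cat /tmp/config.toml   # prints "    SystemdCgroup = false"
```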
	I0706 21:31:40.838442   10604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0706 21:31:40.853379   10604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0706 21:31:40.869267   10604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:31:40.985327   10604 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0706 21:31:41.006815   10604 start.go:466] detecting cgroup driver to use...
	I0706 21:31:41.017525   10604 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0706 21:31:41.046333   10604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 21:31:41.068848   10604 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0706 21:31:41.865044   10604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 21:31:41.889524   10604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 21:31:41.908356   10604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 21:31:41.933594   10604 ssh_runner.go:195] Run: which cri-dockerd
	I0706 21:31:41.948905   10604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0706 21:31:41.956507   10604 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0706 21:31:41.979746   10604 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0706 21:31:42.111882   10604 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0706 21:31:42.212465   10604 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0706 21:31:42.212522   10604 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0706 21:31:42.234526   10604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:31:42.343957   10604 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 21:31:43.424668   10604 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0807036s)
	I0706 21:31:43.528771   10604 out.go:177] 
	W0706 21:31:43.544730   10604 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0706 21:31:43.544829   10604 out.go:239] * 
	W0706 21:31:43.546219   10604 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0706 21:31:43.677830   10604 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:212: upgrade from v1.6.2 to HEAD failed: out/minikube-windows-amd64.exe start -p stopped-upgrade-322600 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (357.68s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (142.61s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-815300 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-815300 --alsologtostderr -v=1 --driver=hyperv: (1m45.1182353s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-815300] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting control plane node pause-815300 in cluster pause-815300
	* Updating the running hyperv "pause-815300" VM ...
	* Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	
	  - Want kubectl v1.27.3? Try 'minikube kubectl -- get pods -A'
	* Done! kubectl is now configured to use "pause-815300" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0706 21:27:04.015195    4688 out.go:296] Setting OutFile to fd 1976 ...
	I0706 21:27:04.084559    4688 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 21:27:04.084559    4688 out.go:309] Setting ErrFile to fd 1672...
	I0706 21:27:04.084559    4688 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 21:27:04.110399    4688 out.go:303] Setting JSON to false
	I0706 21:27:04.111848    4688 start.go:127] hostinfo: {"hostname":"minikube6","uptime":496961,"bootTime":1688181863,"procs":162,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3155 Build 19045.3155","kernelVersion":"10.0.19045.3155 Build 19045.3155","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0706 21:27:04.111848    4688 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 21:27:04.115030    4688 out.go:177] * [pause-815300] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	I0706 21:27:04.123890    4688 notify.go:220] Checking for updates...
	I0706 21:27:04.126769    4688 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 21:27:04.130826    4688 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 21:27:04.134734    4688 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0706 21:27:04.137472    4688 out.go:177]   - MINIKUBE_LOCATION=16832
	I0706 21:27:04.140257    4688 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 21:27:04.145829    4688 config.go:182] Loaded profile config "pause-815300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 21:27:04.147006    4688 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 21:27:05.836449    4688 out.go:177] * Using the hyperv driver based on existing profile
	I0706 21:27:05.838776    4688 start.go:297] selected driver: hyperv
	I0706 21:27:05.838823    4688 start.go:944] validating driver "hyperv" against &{Name:pause-815300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.27.3 ClusterName:pause-815300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.72.136 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:f
alse olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 21:27:05.838823    4688 start.go:955] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 21:27:05.892116    4688 cni.go:84] Creating CNI manager for ""
	I0706 21:27:05.892116    4688 cni.go:152] "hyperv" driver + "docker" runtime found, recommending bridge
	I0706 21:27:05.892116    4688 start_flags.go:319] config:
	{Name:pause-815300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-815300 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.72.136 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry
-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 21:27:05.892659    4688 iso.go:125] acquiring lock: {Name:mk7608081596fbafb2a8a8984d26768bdf174467 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:27:05.897809    4688 out.go:177] * Starting control plane node pause-815300 in cluster pause-815300
	I0706 21:27:05.900398    4688 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 21:27:05.900659    4688 preload.go:148] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0706 21:27:05.900659    4688 cache.go:57] Caching tarball of preloaded images
	I0706 21:27:05.901113    4688 preload.go:174] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0706 21:27:05.901422    4688 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 21:27:05.901822    4688 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-815300\config.json ...
	I0706 21:27:05.904761    4688 start.go:365] acquiring machines lock for pause-815300: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 21:27:05.904942    4688 start.go:369] acquired machines lock for "pause-815300" in 43.7µs
	I0706 21:27:05.905145    4688 start.go:96] Skipping create...Using existing machine configuration
	I0706 21:27:05.905145    4688 fix.go:54] fixHost starting: 
	I0706 21:27:05.906125    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-815300 ).state
	I0706 21:27:06.683217    4688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:27:06.683257    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:06.683257    4688 fix.go:102] recreateIfNeeded on pause-815300: state=Running err=<nil>
	W0706 21:27:06.683257    4688 fix.go:128] unexpected machine state, will restart: <nil>
	I0706 21:27:06.687416    4688 out.go:177] * Updating the running hyperv "pause-815300" VM ...
	I0706 21:27:06.688875    4688 machine.go:88] provisioning docker machine ...
	I0706 21:27:06.688875    4688 buildroot.go:166] provisioning hostname "pause-815300"
	I0706 21:27:06.688875    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-815300 ).state
	I0706 21:27:07.446442    4688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:27:07.446442    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:07.446442    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-815300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:27:08.462220    4688 main.go:141] libmachine: [stdout =====>] : 172.29.72.136
	
	I0706 21:27:08.462220    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:08.466655    4688 main.go:141] libmachine: Using SSH client type: native
	I0706 21:27:08.467562    4688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.72.136 22 <nil> <nil>}
	I0706 21:27:08.467617    4688 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-815300 && echo "pause-815300" | sudo tee /etc/hostname
	I0706 21:27:08.633914    4688 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-815300
	
	I0706 21:27:08.633914    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-815300 ).state
	I0706 21:27:09.326555    4688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:27:09.327134    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:09.327205    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-815300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:27:10.329935    4688 main.go:141] libmachine: [stdout =====>] : 172.29.72.136
	
	I0706 21:27:10.330001    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:10.334147    4688 main.go:141] libmachine: Using SSH client type: native
	I0706 21:27:10.334927    4688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.72.136 22 <nil> <nil>}
	I0706 21:27:10.334927    4688 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-815300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-815300/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-815300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0706 21:27:10.486181    4688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0706 21:27:10.486289    4688 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0706 21:27:10.486289    4688 buildroot.go:174] setting up certificates
	I0706 21:27:10.486400    4688 provision.go:83] configureAuth start
	I0706 21:27:10.486400    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-815300 ).state
	I0706 21:27:11.170876    4688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:27:11.170876    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:11.171092    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-815300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:27:12.206722    4688 main.go:141] libmachine: [stdout =====>] : 172.29.72.136
	
	I0706 21:27:12.206722    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:12.206722    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-815300 ).state
	I0706 21:27:12.921735    4688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:27:12.921792    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:12.921871    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-815300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:27:13.968128    4688 main.go:141] libmachine: [stdout =====>] : 172.29.72.136
	
	I0706 21:27:13.968279    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:13.968279    4688 provision.go:138] copyHostCerts
	I0706 21:27:13.968864    4688 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0706 21:27:13.968864    4688 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0706 21:27:13.969449    4688 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0706 21:27:13.971418    4688 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0706 21:27:13.971473    4688 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0706 21:27:13.972001    4688 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0706 21:27:13.973648    4688 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0706 21:27:13.973711    4688 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0706 21:27:13.974160    4688 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0706 21:27:13.975697    4688 provision.go:112] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.pause-815300 san=[172.29.72.136 172.29.72.136 localhost 127.0.0.1 minikube pause-815300]
	I0706 21:27:14.195063    4688 provision.go:172] copyRemoteCerts
	I0706 21:27:14.199905    4688 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0706 21:27:14.199905    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-815300 ).state
	I0706 21:27:14.971937    4688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:27:14.972161    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:14.972285    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-815300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:27:16.106320    4688 main.go:141] libmachine: [stdout =====>] : 172.29.72.136
	
	I0706 21:27:16.106320    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:16.106939    4688 sshutil.go:53] new ssh client: &{IP:172.29.72.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-815300\id_rsa Username:docker}
	I0706 21:27:16.230741    4688 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.0308213s)
	I0706 21:27:16.231210    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0706 21:27:16.278901    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0706 21:27:16.323519    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0706 21:27:16.391249    4688 provision.go:86] duration metric: configureAuth took 5.9046922s
	I0706 21:27:16.391249    4688 buildroot.go:189] setting minikube options for container-runtime
	I0706 21:27:16.391471    4688 config.go:182] Loaded profile config "pause-815300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 21:27:16.392113    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-815300 ).state
	I0706 21:27:17.173774    4688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:27:17.173774    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:17.173774    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-815300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:27:18.423442    4688 main.go:141] libmachine: [stdout =====>] : 172.29.72.136
	
	I0706 21:27:18.423513    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:18.430071    4688 main.go:141] libmachine: Using SSH client type: native
	I0706 21:27:18.431240    4688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.72.136 22 <nil> <nil>}
	I0706 21:27:18.431298    4688 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0706 21:27:18.581990    4688 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0706 21:27:18.581990    4688 buildroot.go:70] root file system type: tmpfs
	I0706 21:27:18.581990    4688 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0706 21:27:18.581990    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-815300 ).state
	I0706 21:27:19.461723    4688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:27:19.461796    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:19.461853    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-815300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:27:20.609993    4688 main.go:141] libmachine: [stdout =====>] : 172.29.72.136
	
	I0706 21:27:20.610207    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:20.616501    4688 main.go:141] libmachine: Using SSH client type: native
	I0706 21:27:20.617974    4688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.72.136 22 <nil> <nil>}
	I0706 21:27:20.618124    4688 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0706 21:27:20.795299    4688 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0706 21:27:20.795299    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-815300 ).state
	I0706 21:27:21.566999    4688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:27:21.567127    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:21.567127    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-815300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:27:22.704444    4688 main.go:141] libmachine: [stdout =====>] : 172.29.72.136
	
	I0706 21:27:22.704523    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:22.712695    4688 main.go:141] libmachine: Using SSH client type: native
	I0706 21:27:22.713617    4688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.72.136 22 <nil> <nil>}
	I0706 21:27:22.713617    4688 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0706 21:27:22.871297    4688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0706 21:27:22.871297    4688 machine.go:91] provisioned docker machine in 16.1823049s
	I0706 21:27:22.871297    4688 start.go:300] post-start starting for "pause-815300" (driver="hyperv")
	I0706 21:27:22.871297    4688 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0706 21:27:22.880799    4688 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0706 21:27:22.880869    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-815300 ).state
	I0706 21:27:23.680893    4688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:27:23.680893    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:23.680893    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-815300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:27:24.935902    4688 main.go:141] libmachine: [stdout =====>] : 172.29.72.136
	
	I0706 21:27:24.935902    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:24.935902    4688 sshutil.go:53] new ssh client: &{IP:172.29.72.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-815300\id_rsa Username:docker}
	I0706 21:27:25.047600    4688 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (2.1667161s)
	I0706 21:27:25.058044    4688 ssh_runner.go:195] Run: cat /etc/os-release
	I0706 21:27:25.066438    4688 info.go:137] Remote host: Buildroot 2021.02.12
	I0706 21:27:25.066438    4688 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0706 21:27:25.066438    4688 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0706 21:27:25.068065    4688 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem -> 82562.pem in /etc/ssl/certs
	I0706 21:27:25.082376    4688 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0706 21:27:25.101364    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /etc/ssl/certs/82562.pem (1708 bytes)
	I0706 21:27:25.150164    4688 start.go:303] post-start completed in 2.2788508s
	I0706 21:27:25.151582    4688 fix.go:56] fixHost completed within 19.2462988s
	I0706 21:27:25.151929    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-815300 ).state
	I0706 21:27:25.954653    4688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:27:25.954843    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:25.954843    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-815300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:27:26.963401    4688 main.go:141] libmachine: [stdout =====>] : 172.29.72.136
	
	I0706 21:27:26.963401    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:26.967642    4688 main.go:141] libmachine: Using SSH client type: native
	I0706 21:27:26.968538    4688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.72.136 22 <nil> <nil>}
	I0706 21:27:26.968538    4688 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0706 21:27:27.115597    4688 main.go:141] libmachine: SSH cmd err, output: <nil>: 1688678847.116674255
	
	I0706 21:27:27.115709    4688 fix.go:206] guest clock: 1688678847.116674255
	I0706 21:27:27.115709    4688 fix.go:219] Guest: 2023-07-06 21:27:27.116674255 +0000 UTC Remote: 2023-07-06 21:27:25.1518256 +0000 UTC m=+21.209205701 (delta=1.964848655s)
	I0706 21:27:27.115709    4688 fix.go:190] guest clock delta is within tolerance: 1.964848655s
	I0706 21:27:27.115709    4688 start.go:83] releasing machines lock for "pause-815300", held for 21.2106145s
	I0706 21:27:27.115966    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-815300 ).state
	I0706 21:27:27.867044    4688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:27:27.867044    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:27.867171    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-815300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:27:28.955204    4688 main.go:141] libmachine: [stdout =====>] : 172.29.72.136
	
	I0706 21:27:28.955319    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:28.959711    4688 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0706 21:27:28.959874    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-815300 ).state
	I0706 21:27:28.970110    4688 ssh_runner.go:195] Run: cat /version.json
	I0706 21:27:28.970110    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-815300 ).state
	I0706 21:27:29.819552    4688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:27:29.819610    4688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:27:29.819673    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:29.819673    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:29.819726    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-815300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:27:29.819726    4688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-815300 ).networkadapters[0]).ipaddresses[0]
	I0706 21:27:31.056699    4688 main.go:141] libmachine: [stdout =====>] : 172.29.72.136
	
	I0706 21:27:31.056960    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:31.057383    4688 sshutil.go:53] new ssh client: &{IP:172.29.72.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-815300\id_rsa Username:docker}
	I0706 21:27:31.079172    4688 main.go:141] libmachine: [stdout =====>] : 172.29.72.136
	
	I0706 21:27:31.079235    4688 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:27:31.079645    4688 sshutil.go:53] new ssh client: &{IP:172.29.72.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-815300\id_rsa Username:docker}
	I0706 21:27:31.168042    4688 ssh_runner.go:235] Completed: cat /version.json: (2.1979159s)
	I0706 21:27:31.182689    4688 ssh_runner.go:195] Run: systemctl --version
	I0706 21:27:31.261407    4688 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (2.3015979s)
	I0706 21:27:31.276077    4688 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0706 21:27:31.286081    4688 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0706 21:27:31.301504    4688 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0706 21:27:31.324182    4688 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0706 21:27:31.324290    4688 start.go:466] detecting cgroup driver to use...
	I0706 21:27:31.324837    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 21:27:31.381106    4688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0706 21:27:31.423786    4688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0706 21:27:31.442453    4688 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0706 21:27:31.457037    4688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0706 21:27:31.484703    4688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 21:27:31.512774    4688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0706 21:27:31.541607    4688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0706 21:27:31.576541    4688 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0706 21:27:31.609729    4688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0706 21:27:31.640416    4688 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0706 21:27:31.669639    4688 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0706 21:27:31.697047    4688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:27:31.914181    4688 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0706 21:27:31.950248    4688 start.go:466] detecting cgroup driver to use...
	I0706 21:27:31.962849    4688 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0706 21:27:31.998287    4688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 21:27:32.051430    4688 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0706 21:27:32.098828    4688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0706 21:27:32.143777    4688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0706 21:27:32.179471    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0706 21:27:32.220071    4688 ssh_runner.go:195] Run: which cri-dockerd
	I0706 21:27:32.239549    4688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0706 21:27:32.257315    4688 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0706 21:27:32.303391    4688 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0706 21:27:32.545983    4688 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0706 21:27:32.758772    4688 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0706 21:27:32.758901    4688 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0706 21:27:32.804464    4688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:27:33.022851    4688 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0706 21:27:46.176816    4688 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.1538095s)
	I0706 21:27:46.190162    4688 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 21:27:46.374929    4688 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0706 21:27:46.578854    4688 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0706 21:27:46.769371    4688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:27:46.941381    4688 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0706 21:27:46.989030    4688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0706 21:27:47.188157    4688 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0706 21:27:47.317495    4688 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0706 21:27:47.337559    4688 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0706 21:27:47.350269    4688 start.go:534] Will wait 60s for crictl version
	I0706 21:27:47.362842    4688 ssh_runner.go:195] Run: which crictl
	I0706 21:27:47.379597    4688 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0706 21:27:47.446754    4688 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0706 21:27:47.454027    4688 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 21:27:47.498128    4688 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0706 21:27:47.671447    4688 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0706 21:27:47.672278    4688 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0706 21:27:47.681773    4688 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0706 21:27:47.681868    4688 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0706 21:27:47.681868    4688 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0706 21:27:47.681868    4688 ip.go:207] Found interface: {Index:10 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:93:76:79 Flags:up|broadcast|multicast|running}
	I0706 21:27:47.685772    4688 ip.go:210] interface addr: fe80::9492:57c6:5513:d3cc/64
	I0706 21:27:47.685772    4688 ip.go:210] interface addr: 172.29.64.1/20
	I0706 21:27:47.696361    4688 ssh_runner.go:195] Run: grep 172.29.64.1	host.minikube.internal$ /etc/hosts
	I0706 21:27:47.698640    4688 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 21:27:47.708206    4688 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0706 21:27:47.750738    4688 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0706 21:27:47.750794    4688 docker.go:566] Images already preloaded, skipping extraction
	I0706 21:27:47.758787    4688 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0706 21:27:47.790800    4688 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0706 21:27:47.790859    4688 cache_images.go:84] Images are preloaded, skipping loading
	I0706 21:27:47.799986    4688 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0706 21:27:47.838837    4688 cni.go:84] Creating CNI manager for ""
	I0706 21:27:47.838993    4688 cni.go:152] "hyperv" driver + "docker" runtime found, recommending bridge
	I0706 21:27:47.839065    4688 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0706 21:27:47.839136    4688 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.72.136 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-815300 NodeName:pause-815300 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.72.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.72.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0706 21:27:47.839762    4688 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.72.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "pause-815300"
	  kubeletExtraArgs:
	    node-ip: 172.29.72.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.72.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0706 21:27:47.839934    4688 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=pause-815300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.72.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:pause-815300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0706 21:27:47.851993    4688 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0706 21:27:47.867666    4688 binaries.go:44] Found k8s binaries, skipping transfer
	I0706 21:27:47.880509    4688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0706 21:27:47.894558    4688 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0706 21:27:47.923941    4688 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0706 21:27:47.949371    4688 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0706 21:27:47.997344    4688 ssh_runner.go:195] Run: grep 172.29.72.136	control-plane.minikube.internal$ /etc/hosts
	I0706 21:27:48.004617    4688 certs.go:56] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-815300 for IP: 172.29.72.136
	I0706 21:27:48.004719    4688 certs.go:190] acquiring lock for shared ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 21:27:48.022259    4688 certs.go:199] skipping minikubeCA CA generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0706 21:27:48.036972    4688 certs.go:199] skipping proxyClientCA CA generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0706 21:27:48.052984    4688 certs.go:315] skipping minikube-user signed cert generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-815300\client.key
	I0706 21:27:48.071780    4688 certs.go:315] skipping minikube signed cert generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-815300\apiserver.key.3e5af9ab
	I0706 21:27:48.087206    4688 certs.go:315] skipping aggregator signed cert generation: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-815300\proxy-client.key
	I0706 21:27:48.091895    4688 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem (1338 bytes)
	W0706 21:27:48.098485    4688 certs.go:433] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256_empty.pem, impossibly tiny 0 bytes
	I0706 21:27:48.098485    4688 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0706 21:27:48.108158    4688 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0706 21:27:48.117307    4688 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0706 21:27:48.124648    4688 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0706 21:27:48.133525    4688 certs.go:437] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem (1708 bytes)
	I0706 21:27:48.142284    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-815300\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0706 21:27:48.181463    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-815300\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0706 21:27:48.220932    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-815300\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0706 21:27:48.267479    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\pause-815300\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0706 21:27:48.307973    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0706 21:27:48.349374    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0706 21:27:48.396933    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0706 21:27:48.435924    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0706 21:27:48.471580    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\8256.pem --> /usr/share/ca-certificates/8256.pem (1338 bytes)
	I0706 21:27:48.512091    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\82562.pem --> /usr/share/ca-certificates/82562.pem (1708 bytes)
	I0706 21:27:48.549822    4688 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0706 21:27:48.588160    4688 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0706 21:27:48.628038    4688 ssh_runner.go:195] Run: openssl version
	I0706 21:27:48.646713    4688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/8256.pem && ln -fs /usr/share/ca-certificates/8256.pem /etc/ssl/certs/8256.pem"
	I0706 21:27:48.671867    4688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8256.pem
	I0706 21:27:48.677737    4688 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul  6 20:14 /usr/share/ca-certificates/8256.pem
	I0706 21:27:48.688076    4688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8256.pem
	I0706 21:27:48.707058    4688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/8256.pem /etc/ssl/certs/51391683.0"
	I0706 21:27:48.731560    4688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82562.pem && ln -fs /usr/share/ca-certificates/82562.pem /etc/ssl/certs/82562.pem"
	I0706 21:27:48.756419    4688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82562.pem
	I0706 21:27:48.762386    4688 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul  6 20:14 /usr/share/ca-certificates/82562.pem
	I0706 21:27:48.774808    4688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82562.pem
	I0706 21:27:48.795048    4688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/82562.pem /etc/ssl/certs/3ec20f2e.0"
	I0706 21:27:48.821501    4688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0706 21:27:48.850247    4688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0706 21:27:48.855542    4688 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul  6 20:05 /usr/share/ca-certificates/minikubeCA.pem
	I0706 21:27:48.865448    4688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0706 21:27:48.881925    4688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0706 21:27:48.906431    4688 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0706 21:27:48.921041    4688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0706 21:27:48.941221    4688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0706 21:27:48.957708    4688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0706 21:27:48.973273    4688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0706 21:27:48.990722    4688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0706 21:27:49.007134    4688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0706 21:27:49.014420    4688 kubeadm.go:404] StartCluster: {Name:pause-815300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:pause-815300 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:172.29.72.136 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 21:27:49.022195    4688 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0706 21:27:49.058079    4688 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0706 21:27:49.074080    4688 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0706 21:27:49.074264    4688 kubeadm.go:636] restartCluster start
	I0706 21:27:49.091655    4688 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0706 21:27:49.114745    4688 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:49.126830    4688 kubeconfig.go:92] found "pause-815300" server: "https://172.29.72.136:8443"
	I0706 21:27:49.129864    4688 kapi.go:59] client config for pause-815300: &rest.Config{Host:"https://172.29.72.136:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\pause-815300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\pause-815300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 21:27:49.135424    4688 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0706 21:27:49.155711    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:49.170496    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:49.189728    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:49.703758    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:49.713042    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:49.730970    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:50.195103    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:50.204539    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:50.222716    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:50.694874    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:50.704803    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:50.722931    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:51.206157    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:51.215196    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:51.225319    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:51.700585    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:51.711030    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:51.740674    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:52.194553    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:52.202142    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:52.205568    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:52.690388    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:52.700507    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:52.720470    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:53.203206    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:53.214869    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:53.285555    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:53.705152    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:53.717118    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:53.736872    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:54.195217    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:54.206696    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:54.246068    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:54.707924    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:54.718010    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:54.737002    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:55.195683    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:55.203575    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:55.218028    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:55.693934    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:55.701558    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:55.721001    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:56.200495    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:56.210891    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:56.235019    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:56.699822    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:56.708096    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:56.728929    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:57.190741    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:57.198604    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:57.216428    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:57.697603    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:57.707521    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:57.725247    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:58.200933    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:58.209019    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:58.230180    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:58.707043    4688 api_server.go:166] Checking apiserver status ...
	I0706 21:27:58.717556    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0706 21:27:58.789725    4688 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:27:59.158459    4688 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0706 21:27:59.284613    4688 kubeadm.go:1128] stopping kube-system containers ...
	I0706 21:27:59.302680    4688 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0706 21:27:59.365535    4688 docker.go:462] Stopping containers: [000c4dcb0b47 bfbf93f0c784 933f2bbb2a3f 01e4333af66b f490307bd0e1 d3414fa8c8b8 b2c8b4352a76 3b5145a8fa62 72d473bc5f4c 225da473f1a1 0461c9cafea2 ea301d243cce fa4fd0cfe65f 57e18d28cc80 8cb957db0149 34d6e18cb2ba 84800fc88637 ef9e689a728d f61062f65247 06cce869927c edaea9083e9f 0e405f7b9077 315aaf7d9e60 e4f43d24529d]
	I0706 21:27:59.374047    4688 ssh_runner.go:195] Run: docker stop 000c4dcb0b47 bfbf93f0c784 933f2bbb2a3f 01e4333af66b f490307bd0e1 d3414fa8c8b8 b2c8b4352a76 3b5145a8fa62 72d473bc5f4c 225da473f1a1 0461c9cafea2 ea301d243cce fa4fd0cfe65f 57e18d28cc80 8cb957db0149 34d6e18cb2ba 84800fc88637 ef9e689a728d f61062f65247 06cce869927c edaea9083e9f 0e405f7b9077 315aaf7d9e60 e4f43d24529d
	I0706 21:28:21.379762    4688 ssh_runner.go:235] Completed: docker stop 000c4dcb0b47 bfbf93f0c784 933f2bbb2a3f 01e4333af66b f490307bd0e1 d3414fa8c8b8 b2c8b4352a76 3b5145a8fa62 72d473bc5f4c 225da473f1a1 0461c9cafea2 ea301d243cce fa4fd0cfe65f 57e18d28cc80 8cb957db0149 34d6e18cb2ba 84800fc88637 ef9e689a728d f61062f65247 06cce869927c edaea9083e9f 0e405f7b9077 315aaf7d9e60 e4f43d24529d: (22.0055565s)
	I0706 21:28:21.390004    4688 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0706 21:28:21.472778    4688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0706 21:28:21.487780    4688 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul  6 21:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul  6 21:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul  6 21:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Jul  6 21:25 /etc/kubernetes/scheduler.conf
	
	I0706 21:28:21.498465    4688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0706 21:28:21.521903    4688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0706 21:28:21.549814    4688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0706 21:28:21.583503    4688 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:28:21.594681    4688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0706 21:28:21.622897    4688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0706 21:28:21.635142    4688 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:28:21.646936    4688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0706 21:28:21.673863    4688 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0706 21:28:21.699281    4688 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0706 21:28:21.699412    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0706 21:28:21.813074    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0706 21:28:23.268609    4688 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.4554561s)
	I0706 21:28:23.268655    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0706 21:28:23.562266    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0706 21:28:23.667446    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0706 21:28:23.765129    4688 api_server.go:52] waiting for apiserver process to appear ...
	I0706 21:28:23.779487    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:24.324522    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:24.832072    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:25.336827    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:25.833484    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:26.328137    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:26.820288    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:27.328758    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:27.828514    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:28.340764    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:28.358630    4688 api_server.go:72] duration metric: took 4.5934671s to wait for apiserver process to appear ...
	I0706 21:28:28.358630    4688 api_server.go:88] waiting for apiserver healthz status ...
	I0706 21:28:28.358630    4688 api_server.go:253] Checking apiserver healthz at https://172.29.72.136:8443/healthz ...
	I0706 21:28:30.677457    4688 api_server.go:279] https://172.29.72.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0706 21:28:30.681074    4688 api_server.go:103] status: https://172.29.72.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0706 21:28:31.200158    4688 api_server.go:253] Checking apiserver healthz at https://172.29.72.136:8443/healthz ...
	I0706 21:28:31.210224    4688 api_server.go:279] https://172.29.72.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0706 21:28:31.210266    4688 api_server.go:103] status: https://172.29.72.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0706 21:28:31.685709    4688 api_server.go:253] Checking apiserver healthz at https://172.29.72.136:8443/healthz ...
	I0706 21:28:31.698921    4688 api_server.go:279] https://172.29.72.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0706 21:28:31.698990    4688 api_server.go:103] status: https://172.29.72.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0706 21:28:32.199200    4688 api_server.go:253] Checking apiserver healthz at https://172.29.72.136:8443/healthz ...
	I0706 21:28:32.210170    4688 api_server.go:279] https://172.29.72.136:8443/healthz returned 200:
	ok
	I0706 21:28:32.238355    4688 api_server.go:141] control plane version: v1.27.3
	I0706 21:28:32.238355    4688 api_server.go:131] duration metric: took 3.879698s to wait for apiserver health ...
	I0706 21:28:32.238355    4688 cni.go:84] Creating CNI manager for ""
	I0706 21:28:32.238355    4688 cni.go:152] "hyperv" driver + "docker" runtime found, recommending bridge
	I0706 21:28:32.250094    4688 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0706 21:28:32.265073    4688 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0706 21:28:32.286299    4688 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0706 21:28:32.321093    4688 system_pods.go:43] waiting for kube-system pods to appear ...
	I0706 21:28:32.403858    4688 system_pods.go:59] 6 kube-system pods found
	I0706 21:28:32.403908    4688 system_pods.go:61] "coredns-5d78c9869d-pxd8p" [86d450a5-067f-4e41-b0ab-74b6a870077e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0706 21:28:32.403908    4688 system_pods.go:61] "etcd-pause-815300" [67ca2ac7-3615-40ff-99c2-865777e75886] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0706 21:28:32.403908    4688 system_pods.go:61] "kube-apiserver-pause-815300" [419ef7fe-1b05-49b2-89db-698a672f6f40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0706 21:28:32.403908    4688 system_pods.go:61] "kube-controller-manager-pause-815300" [4f33c53a-ea60-4baf-9bf0-d25422d4a2c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0706 21:28:32.403908    4688 system_pods.go:61] "kube-proxy-q98dz" [75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0706 21:28:32.403908    4688 system_pods.go:61] "kube-scheduler-pause-815300" [b509458d-d17a-4b0c-8b10-74aabc3b620b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0706 21:28:32.403908    4688 system_pods.go:74] duration metric: took 82.814ms to wait for pod list to return data ...
	I0706 21:28:32.403908    4688 node_conditions.go:102] verifying NodePressure condition ...
	I0706 21:28:32.409748    4688 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 21:28:32.409748    4688 node_conditions.go:123] node cpu capacity is 2
	I0706 21:28:32.409748    4688 node_conditions.go:105] duration metric: took 5.8398ms to run NodePressure ...
	I0706 21:28:32.409748    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0706 21:28:32.861577    4688 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0706 21:28:32.868326    4688 kubeadm.go:787] kubelet initialised
	I0706 21:28:32.868418    4688 kubeadm.go:788] duration metric: took 6.7478ms waiting for restarted kubelet to initialise ...
	I0706 21:28:32.868455    4688 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 21:28:32.879130    4688 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-pxd8p" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:34.903597    4688 pod_ready.go:102] pod "coredns-5d78c9869d-pxd8p" in "kube-system" namespace has status "Ready":"False"
	I0706 21:28:35.421101    4688 pod_ready.go:92] pod "coredns-5d78c9869d-pxd8p" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:35.421101    4688 pod_ready.go:81] duration metric: took 2.5419528s waiting for pod "coredns-5d78c9869d-pxd8p" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:35.421101    4688 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:37.455368    4688 pod_ready.go:102] pod "etcd-pause-815300" in "kube-system" namespace has status "Ready":"False"
	I0706 21:28:39.463346    4688 pod_ready.go:102] pod "etcd-pause-815300" in "kube-system" namespace has status "Ready":"False"
	I0706 21:28:41.951383    4688 pod_ready.go:102] pod "etcd-pause-815300" in "kube-system" namespace has status "Ready":"False"
	I0706 21:28:43.486893    4688 pod_ready.go:92] pod "etcd-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:43.486951    4688 pod_ready.go:81] duration metric: took 8.0657919s waiting for pod "etcd-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:43.486951    4688 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.517495    4688 pod_ready.go:92] pod "kube-apiserver-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:45.517495    4688 pod_ready.go:81] duration metric: took 2.0305296s waiting for pod "kube-apiserver-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.517495    4688 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.526185    4688 pod_ready.go:92] pod "kube-controller-manager-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:45.526185    4688 pod_ready.go:81] duration metric: took 8.6901ms waiting for pod "kube-controller-manager-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.526185    4688 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q98dz" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.534542    4688 pod_ready.go:92] pod "kube-proxy-q98dz" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:45.534596    4688 pod_ready.go:81] duration metric: took 8.4105ms waiting for pod "kube-proxy-q98dz" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.534596    4688 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.541790    4688 pod_ready.go:92] pod "kube-scheduler-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:45.542038    4688 pod_ready.go:81] duration metric: took 7.4426ms waiting for pod "kube-scheduler-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.542105    4688 pod_ready.go:38] duration metric: took 12.6735589s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 21:28:45.542105    4688 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0706 21:28:45.556187    4688 ops.go:34] apiserver oom_adj: -16
	I0706 21:28:45.556245    4688 kubeadm.go:640] restartCluster took 56.4815062s
	I0706 21:28:45.556245    4688 kubeadm.go:406] StartCluster complete in 56.5414183s
	I0706 21:28:45.556343    4688 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 21:28:45.556713    4688 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 21:28:45.558098    4688 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 21:28:45.559912    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0706 21:28:45.559912    4688 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0706 21:28:45.565659    4688 out.go:177] * Enabled addons: 
	I0706 21:28:45.560750    4688 config.go:182] Loaded profile config "pause-815300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 21:28:45.567180    4688 addons.go:499] enable addons completed in 7.2688ms: enabled=[]
	I0706 21:28:45.575057    4688 kapi.go:59] client config for pause-815300: &rest.Config{Host:"https://172.29.72.136:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\pause-815300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\pause-815300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 21:28:45.582379    4688 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-815300" context rescaled to 1 replicas
	I0706 21:28:45.582444    4688 start.go:223] Will wait 6m0s for node &{Name: IP:172.29.72.136 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 21:28:45.587251    4688 out.go:177] * Verifying Kubernetes components...
	I0706 21:28:45.597929    4688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 21:28:45.707639    4688 node_ready.go:35] waiting up to 6m0s for node "pause-815300" to be "Ready" ...
	I0706 21:28:45.707957    4688 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0706 21:28:45.712657    4688 node_ready.go:49] node "pause-815300" has status "Ready":"True"
	I0706 21:28:45.712751    4688 node_ready.go:38] duration metric: took 4.8971ms waiting for node "pause-815300" to be "Ready" ...
	I0706 21:28:45.712751    4688 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 21:28:45.721054    4688 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-pxd8p" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.928472    4688 pod_ready.go:92] pod "coredns-5d78c9869d-pxd8p" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:45.928472    4688 pod_ready.go:81] duration metric: took 207.4172ms waiting for pod "coredns-5d78c9869d-pxd8p" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.928472    4688 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:46.315507    4688 pod_ready.go:92] pod "etcd-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:46.315556    4688 pod_ready.go:81] duration metric: took 387.0808ms waiting for pod "etcd-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:46.315556    4688 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:46.719096    4688 pod_ready.go:92] pod "kube-apiserver-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:46.719096    4688 pod_ready.go:81] duration metric: took 403.5373ms waiting for pod "kube-apiserver-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:46.719096    4688 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:47.112182    4688 pod_ready.go:92] pod "kube-controller-manager-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:47.112182    4688 pod_ready.go:81] duration metric: took 393.0829ms waiting for pod "kube-controller-manager-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:47.112252    4688 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q98dz" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:47.521795    4688 pod_ready.go:92] pod "kube-proxy-q98dz" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:47.521861    4688 pod_ready.go:81] duration metric: took 409.6054ms waiting for pod "kube-proxy-q98dz" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:47.521861    4688 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:47.910585    4688 pod_ready.go:92] pod "kube-scheduler-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:47.911129    4688 pod_ready.go:81] duration metric: took 389.2656ms waiting for pod "kube-scheduler-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:47.911172    4688 pod_ready.go:38] duration metric: took 2.1983459s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 21:28:47.911211    4688 api_server.go:52] waiting for apiserver process to appear ...
	I0706 21:28:47.924012    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:47.951453    4688 api_server.go:72] duration metric: took 2.3689304s to wait for apiserver process to appear ...
	I0706 21:28:47.951529    4688 api_server.go:88] waiting for apiserver healthz status ...
	I0706 21:28:47.951529    4688 api_server.go:253] Checking apiserver healthz at https://172.29.72.136:8443/healthz ...
	I0706 21:28:47.961811    4688 api_server.go:279] https://172.29.72.136:8443/healthz returned 200:
	ok
	I0706 21:28:47.963889    4688 api_server.go:141] control plane version: v1.27.3
	I0706 21:28:47.963889    4688 api_server.go:131] duration metric: took 12.3597ms to wait for apiserver health ...
	I0706 21:28:47.963889    4688 system_pods.go:43] waiting for kube-system pods to appear ...
	I0706 21:28:48.127886    4688 system_pods.go:59] 6 kube-system pods found
	I0706 21:28:48.127886    4688 system_pods.go:61] "coredns-5d78c9869d-pxd8p" [86d450a5-067f-4e41-b0ab-74b6a870077e] Running
	I0706 21:28:48.127886    4688 system_pods.go:61] "etcd-pause-815300" [67ca2ac7-3615-40ff-99c2-865777e75886] Running
	I0706 21:28:48.127886    4688 system_pods.go:61] "kube-apiserver-pause-815300" [419ef7fe-1b05-49b2-89db-698a672f6f40] Running
	I0706 21:28:48.127886    4688 system_pods.go:61] "kube-controller-manager-pause-815300" [4f33c53a-ea60-4baf-9bf0-d25422d4a2c1] Running
	I0706 21:28:48.127886    4688 system_pods.go:61] "kube-proxy-q98dz" [75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5] Running
	I0706 21:28:48.127886    4688 system_pods.go:61] "kube-scheduler-pause-815300" [b509458d-d17a-4b0c-8b10-74aabc3b620b] Running
	I0706 21:28:48.127886    4688 system_pods.go:74] duration metric: took 163.9962ms to wait for pod list to return data ...
	I0706 21:28:48.127886    4688 default_sa.go:34] waiting for default service account to be created ...
	I0706 21:28:48.313592    4688 default_sa.go:45] found service account: "default"
	I0706 21:28:48.313592    4688 default_sa.go:55] duration metric: took 185.7043ms for default service account to be created ...
	I0706 21:28:48.313592    4688 system_pods.go:116] waiting for k8s-apps to be running ...
	I0706 21:28:48.528196    4688 system_pods.go:86] 6 kube-system pods found
	I0706 21:28:48.528196    4688 system_pods.go:89] "coredns-5d78c9869d-pxd8p" [86d450a5-067f-4e41-b0ab-74b6a870077e] Running
	I0706 21:28:48.528196    4688 system_pods.go:89] "etcd-pause-815300" [67ca2ac7-3615-40ff-99c2-865777e75886] Running
	I0706 21:28:48.528196    4688 system_pods.go:89] "kube-apiserver-pause-815300" [419ef7fe-1b05-49b2-89db-698a672f6f40] Running
	I0706 21:28:48.528196    4688 system_pods.go:89] "kube-controller-manager-pause-815300" [4f33c53a-ea60-4baf-9bf0-d25422d4a2c1] Running
	I0706 21:28:48.528196    4688 system_pods.go:89] "kube-proxy-q98dz" [75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5] Running
	I0706 21:28:48.528196    4688 system_pods.go:89] "kube-scheduler-pause-815300" [b509458d-d17a-4b0c-8b10-74aabc3b620b] Running
	I0706 21:28:48.528196    4688 system_pods.go:126] duration metric: took 214.6034ms to wait for k8s-apps to be running ...
	I0706 21:28:48.528196    4688 system_svc.go:44] waiting for kubelet service to be running ....
	I0706 21:28:48.540056    4688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 21:28:48.560129    4688 system_svc.go:56] duration metric: took 31.9327ms WaitForService to wait for kubelet.
	I0706 21:28:48.560129    4688 kubeadm.go:581] duration metric: took 2.9776023s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0706 21:28:48.560129    4688 node_conditions.go:102] verifying NodePressure condition ...
	I0706 21:28:48.722300    4688 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 21:28:48.722411    4688 node_conditions.go:123] node cpu capacity is 2
	I0706 21:28:48.722411    4688 node_conditions.go:105] duration metric: took 162.2808ms to run NodePressure ...
	I0706 21:28:48.722468    4688 start.go:228] waiting for startup goroutines ...
	I0706 21:28:48.722468    4688 start.go:233] waiting for cluster config update ...
	I0706 21:28:48.722468    4688 start.go:242] writing updated cluster config ...
	I0706 21:28:48.737148    4688 ssh_runner.go:195] Run: rm -f paused
	I0706 21:28:48.956417    4688 start.go:642] kubectl: 1.18.2, cluster: 1.27.3 (minor skew: 9)
	I0706 21:28:48.958673    4688 out.go:177] 
	W0706 21:28:48.961626    4688 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.27.3.
	! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.27.3.
	I0706 21:28:48.966597    4688 out.go:177]   - Want kubectl v1.27.3? Try 'minikube kubectl -- get pods -A'
	I0706 21:28:48.969434    4688 out.go:177] * Done! kubectl is now configured to use "pause-815300" cluster and "default" namespace by default

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-815300 -n pause-815300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-815300 -n pause-815300: (5.0200787s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-815300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-815300 logs -n 25: (3.934554s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | systemctl status cri-docker                          |                           |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |                   |         |                     |                     |
	|         | --no-pager                                           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo cat                            | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo cat                            | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | cri-dockerd --version                                |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | systemctl status containerd                          |                           |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | systemctl cat containerd                             |                           |                   |         |                     |                     |
	|         | --no-pager                                           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo cat                            | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo cat                            | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | containerd config dump                               |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | systemctl status crio --all                          |                           |                   |         |                     |                     |
	|         | --full --no-pager                                    |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo find                           | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo crio                           | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | config                                               |                           |                   |         |                     |                     |
	| delete  | -p cilium-852700                                     | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC | 06 Jul 23 21:24 UTC |
	| start   | -p kubernetes-upgrade-990200                         | kubernetes-upgrade-990200 | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC | 06 Jul 23 21:27 UTC |
	|         | --memory=2200                                        |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	| ssh     | cert-options-864500 ssh                              | cert-options-864500       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC | 06 Jul 23 21:24 UTC |
	|         | openssl x509 -text -noout -in                        |                           |                   |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                           |                   |         |                     |                     |
	| ssh     | -p cert-options-864500 -- sudo                       | cert-options-864500       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC | 06 Jul 23 21:25 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                           |                   |         |                     |                     |
	| delete  | -p cert-options-864500                               | cert-options-864500       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:25 UTC | 06 Jul 23 21:25 UTC |
	| start   | -p cert-expiration-861000                            | cert-expiration-861000    | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:26 UTC | 06 Jul 23 21:27 UTC |
	|         | --memory=2048                                        |                           |                   |         |                     |                     |
	|         | --cert-expiration=8760h                              |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	| start   | -p pause-815300                                      | pause-815300              | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:27 UTC | 06 Jul 23 21:28 UTC |
	|         | --alsologtostderr -v=1                               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	| delete  | -p cert-expiration-861000                            | cert-expiration-861000    | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:27 UTC | 06 Jul 23 21:28 UTC |
	| stop    | -p kubernetes-upgrade-990200                         | kubernetes-upgrade-990200 | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:27 UTC | 06 Jul 23 21:28 UTC |
	| start   | -p kubernetes-upgrade-990200                         | kubernetes-upgrade-990200 | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:28 UTC |                     |
	|         | --memory=2200                                        |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	| start   | -p force-systemd-env-807400                          | force-systemd-env-807400  | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:28 UTC |                     |
	|         | --memory=2048                                        |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/06 21:28:22
	Running on machine: minikube6
	Binary: Built with gc go1.20.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0706 21:28:21.988969    3852 out.go:296] Setting OutFile to fd 1420 ...
	I0706 21:28:22.054838    3852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 21:28:22.054838    3852 out.go:309] Setting ErrFile to fd 1544...
	I0706 21:28:22.054838    3852 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 21:28:22.070694    3852 out.go:303] Setting JSON to false
	I0706 21:28:22.080168    3852 start.go:127] hostinfo: {"hostname":"minikube6","uptime":497038,"bootTime":1688181863,"procs":154,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3155 Build 19045.3155","kernelVersion":"10.0.19045.3155 Build 19045.3155","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0706 21:28:22.080168    3852 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 21:28:22.088255    3852 out.go:177] * [force-systemd-env-807400] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	I0706 21:28:22.092711    3852 notify.go:220] Checking for updates...
	I0706 21:28:22.096732    3852 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 21:28:22.099352    3852 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0706 21:28:22.101158    3852 out.go:177]   - MINIKUBE_LOCATION=16832
	I0706 21:28:22.104211    3852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 21:28:22.105666    3852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0706 21:28:22.109889    3852 config.go:182] Loaded profile config "kubernetes-upgrade-990200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 21:28:22.120193    3852 config.go:182] Loaded profile config "pause-815300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 21:28:22.121035    3852 config.go:182] Loaded profile config "stopped-upgrade-322600": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0706 21:28:22.127595    3852 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 21:28:23.781845    3852 out.go:177] * Using the hyperv driver based on user configuration
	I0706 21:28:23.866661    3852 start.go:297] selected driver: hyperv
	I0706 21:28:23.866788    3852 start.go:944] validating driver "hyperv" against <nil>
	I0706 21:28:23.866867    3852 start.go:955] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 21:28:23.916761    3852 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 21:28:23.920598    3852 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0706 21:28:23.920666    3852 cni.go:84] Creating CNI manager for ""
	I0706 21:28:23.920666    3852 cni.go:152] "hyperv" driver + "docker" runtime found, recommending bridge
	I0706 21:28:23.920724    3852 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0706 21:28:23.920724    3852 start_flags.go:319] config:
	{Name:force-systemd-env-807400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-807400 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 21:28:23.921189    3852 iso.go:125] acquiring lock: {Name:mk7608081596fbafb2a8a8984d26768bdf174467 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:28:24.010599    3852 out.go:177] * Starting control plane node force-systemd-env-807400 in cluster force-systemd-env-807400
	I0706 21:28:21.379762    4688 ssh_runner.go:235] Completed: docker stop 000c4dcb0b47 bfbf93f0c784 933f2bbb2a3f 01e4333af66b f490307bd0e1 d3414fa8c8b8 b2c8b4352a76 3b5145a8fa62 72d473bc5f4c 225da473f1a1 0461c9cafea2 ea301d243cce fa4fd0cfe65f 57e18d28cc80 8cb957db0149 34d6e18cb2ba 84800fc88637 ef9e689a728d f61062f65247 06cce869927c edaea9083e9f 0e405f7b9077 315aaf7d9e60 e4f43d24529d: (22.0055565s)
	I0706 21:28:21.390004    4688 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0706 21:28:21.472778    4688 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0706 21:28:21.487780    4688 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul  6 21:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jul  6 21:25 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Jul  6 21:26 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Jul  6 21:25 /etc/kubernetes/scheduler.conf
	
	I0706 21:28:21.498465    4688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0706 21:28:21.521903    4688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0706 21:28:21.549814    4688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0706 21:28:21.583503    4688 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:28:21.594681    4688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0706 21:28:21.622897    4688 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0706 21:28:21.635142    4688 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0706 21:28:21.646936    4688 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0706 21:28:21.673863    4688 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0706 21:28:21.699281    4688 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0706 21:28:21.699412    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0706 21:28:21.813074    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0706 21:28:23.268609    4688 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.4554561s)
	I0706 21:28:23.268655    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0706 21:28:23.562266    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0706 21:28:23.667446    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0706 21:28:23.765129    4688 api_server.go:52] waiting for apiserver process to appear ...
	I0706 21:28:23.779487    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:24.167865    3852 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 21:28:24.176035    3852 preload.go:148] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0706 21:28:24.176177    3852 cache.go:57] Caching tarball of preloaded images
	I0706 21:28:24.176508    3852 preload.go:174] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0706 21:28:24.176694    3852 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0706 21:28:24.176974    3852 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\force-systemd-env-807400\config.json ...
	I0706 21:28:24.176974    3852 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\force-systemd-env-807400\config.json: {Name:mkbf4448f8a0b47cf0d7ae5216b63cd7828c2d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 21:28:24.178377    3852 start.go:365] acquiring machines lock for force-systemd-env-807400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0706 21:28:24.324522    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:24.832072    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:25.336827    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:25.833484    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:26.328137    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:26.820288    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:27.328758    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:27.828514    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:28.340764    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:28.358630    4688 api_server.go:72] duration metric: took 4.5934671s to wait for apiserver process to appear ...
	I0706 21:28:28.358630    4688 api_server.go:88] waiting for apiserver healthz status ...
	I0706 21:28:28.358630    4688 api_server.go:253] Checking apiserver healthz at https://172.29.72.136:8443/healthz ...
	I0706 21:28:26.212977   11584 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:28:26.212977   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:26.212977   11584 main.go:141] libmachine: Waiting for host to start...
	I0706 21:28:26.212977   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-990200 ).state
	I0706 21:28:26.940014   11584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:28:26.940014   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:26.940014   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-990200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:28:27.961439   11584 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:28:27.961497   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:28.963257   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-990200 ).state
	I0706 21:28:29.673092   11584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:28:29.673092   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:29.673092   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-990200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:28:30.651888   11584 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:28:30.651987   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:30.677457    4688 api_server.go:279] https://172.29.72.136:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0706 21:28:30.681074    4688 api_server.go:103] status: https://172.29.72.136:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0706 21:28:31.200158    4688 api_server.go:253] Checking apiserver healthz at https://172.29.72.136:8443/healthz ...
	I0706 21:28:31.210224    4688 api_server.go:279] https://172.29.72.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0706 21:28:31.210266    4688 api_server.go:103] status: https://172.29.72.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0706 21:28:31.685709    4688 api_server.go:253] Checking apiserver healthz at https://172.29.72.136:8443/healthz ...
	I0706 21:28:31.698921    4688 api_server.go:279] https://172.29.72.136:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0706 21:28:31.698990    4688 api_server.go:103] status: https://172.29.72.136:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0706 21:28:32.199200    4688 api_server.go:253] Checking apiserver healthz at https://172.29.72.136:8443/healthz ...
	I0706 21:28:32.210170    4688 api_server.go:279] https://172.29.72.136:8443/healthz returned 200:
	ok
	I0706 21:28:32.238355    4688 api_server.go:141] control plane version: v1.27.3
	I0706 21:28:32.238355    4688 api_server.go:131] duration metric: took 3.879698s to wait for apiserver health ...
	I0706 21:28:32.238355    4688 cni.go:84] Creating CNI manager for ""
	I0706 21:28:32.238355    4688 cni.go:152] "hyperv" driver + "docker" runtime found, recommending bridge
	I0706 21:28:32.250094    4688 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0706 21:28:32.265073    4688 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0706 21:28:32.286299    4688 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0706 21:28:32.321093    4688 system_pods.go:43] waiting for kube-system pods to appear ...
	I0706 21:28:32.403858    4688 system_pods.go:59] 6 kube-system pods found
	I0706 21:28:32.403908    4688 system_pods.go:61] "coredns-5d78c9869d-pxd8p" [86d450a5-067f-4e41-b0ab-74b6a870077e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0706 21:28:32.403908    4688 system_pods.go:61] "etcd-pause-815300" [67ca2ac7-3615-40ff-99c2-865777e75886] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0706 21:28:32.403908    4688 system_pods.go:61] "kube-apiserver-pause-815300" [419ef7fe-1b05-49b2-89db-698a672f6f40] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0706 21:28:32.403908    4688 system_pods.go:61] "kube-controller-manager-pause-815300" [4f33c53a-ea60-4baf-9bf0-d25422d4a2c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0706 21:28:32.403908    4688 system_pods.go:61] "kube-proxy-q98dz" [75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0706 21:28:32.403908    4688 system_pods.go:61] "kube-scheduler-pause-815300" [b509458d-d17a-4b0c-8b10-74aabc3b620b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0706 21:28:32.403908    4688 system_pods.go:74] duration metric: took 82.814ms to wait for pod list to return data ...
	I0706 21:28:32.403908    4688 node_conditions.go:102] verifying NodePressure condition ...
	I0706 21:28:32.409748    4688 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 21:28:32.409748    4688 node_conditions.go:123] node cpu capacity is 2
	I0706 21:28:32.409748    4688 node_conditions.go:105] duration metric: took 5.8398ms to run NodePressure ...
	I0706 21:28:32.409748    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0706 21:28:32.861577    4688 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0706 21:28:32.868326    4688 kubeadm.go:787] kubelet initialised
	I0706 21:28:32.868418    4688 kubeadm.go:788] duration metric: took 6.7478ms waiting for restarted kubelet to initialise ...
	I0706 21:28:32.868455    4688 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 21:28:32.879130    4688 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-pxd8p" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:31.667492   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-990200 ).state
	I0706 21:28:32.460998   11584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:28:32.460998   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:32.460998   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-990200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:28:33.460346   11584 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:28:33.460346   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:34.467764   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-990200 ).state
	I0706 21:28:35.168511   11584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:28:35.168706   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:35.168776   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-990200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:28:34.903597    4688 pod_ready.go:102] pod "coredns-5d78c9869d-pxd8p" in "kube-system" namespace has status "Ready":"False"
	I0706 21:28:35.421101    4688 pod_ready.go:92] pod "coredns-5d78c9869d-pxd8p" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:35.421101    4688 pod_ready.go:81] duration metric: took 2.5419528s waiting for pod "coredns-5d78c9869d-pxd8p" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:35.421101    4688 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:37.455368    4688 pod_ready.go:102] pod "etcd-pause-815300" in "kube-system" namespace has status "Ready":"False"
	I0706 21:28:36.159014   11584 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:28:36.159358   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:37.160392   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-990200 ).state
	I0706 21:28:37.864685   11584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:28:37.864853   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:37.864921   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-990200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:28:38.858148   11584 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:28:38.858256   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:39.865689   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-990200 ).state
	I0706 21:28:40.564842   11584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:28:40.564955   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:40.565008   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-990200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:28:39.463346    4688 pod_ready.go:102] pod "etcd-pause-815300" in "kube-system" namespace has status "Ready":"False"
	I0706 21:28:41.951383    4688 pod_ready.go:102] pod "etcd-pause-815300" in "kube-system" namespace has status "Ready":"False"
	I0706 21:28:43.486893    4688 pod_ready.go:92] pod "etcd-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:43.486951    4688 pod_ready.go:81] duration metric: took 8.0657919s waiting for pod "etcd-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:43.486951    4688 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.517495    4688 pod_ready.go:92] pod "kube-apiserver-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:45.517495    4688 pod_ready.go:81] duration metric: took 2.0305296s waiting for pod "kube-apiserver-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.517495    4688 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.526185    4688 pod_ready.go:92] pod "kube-controller-manager-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:45.526185    4688 pod_ready.go:81] duration metric: took 8.6901ms waiting for pod "kube-controller-manager-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.526185    4688 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-q98dz" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.534542    4688 pod_ready.go:92] pod "kube-proxy-q98dz" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:45.534596    4688 pod_ready.go:81] duration metric: took 8.4105ms waiting for pod "kube-proxy-q98dz" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.534596    4688 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.541790    4688 pod_ready.go:92] pod "kube-scheduler-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:45.542038    4688 pod_ready.go:81] duration metric: took 7.4426ms waiting for pod "kube-scheduler-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.542105    4688 pod_ready.go:38] duration metric: took 12.6735589s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 21:28:45.542105    4688 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0706 21:28:45.556187    4688 ops.go:34] apiserver oom_adj: -16
	I0706 21:28:45.556245    4688 kubeadm.go:640] restartCluster took 56.4815062s
	I0706 21:28:45.556245    4688 kubeadm.go:406] StartCluster complete in 56.5414183s
	I0706 21:28:45.556343    4688 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 21:28:45.556713    4688 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 21:28:45.558098    4688 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 21:28:45.559912    4688 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0706 21:28:45.559912    4688 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0706 21:28:45.565659    4688 out.go:177] * Enabled addons: 
	I0706 21:28:45.560750    4688 config.go:182] Loaded profile config "pause-815300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 21:28:45.567180    4688 addons.go:499] enable addons completed in 7.2688ms: enabled=[]
	I0706 21:28:45.575057    4688 kapi.go:59] client config for pause-815300: &rest.Config{Host:"https://172.29.72.136:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\pause-815300\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\pause-815300\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x16f4180), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0706 21:28:45.582379    4688 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-815300" context rescaled to 1 replicas
	I0706 21:28:45.582444    4688 start.go:223] Will wait 6m0s for node &{Name: IP:172.29.72.136 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0706 21:28:45.587251    4688 out.go:177] * Verifying Kubernetes components...
	I0706 21:28:41.551108   11584 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:28:41.551163   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:42.560148   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-990200 ).state
	I0706 21:28:43.269166   11584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:28:43.269399   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:43.269399   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-990200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:28:44.282001   11584 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:28:44.282084   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:45.286870   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-990200 ).state
	I0706 21:28:45.597929    4688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 21:28:45.707639    4688 node_ready.go:35] waiting up to 6m0s for node "pause-815300" to be "Ready" ...
	I0706 21:28:45.707957    4688 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0706 21:28:45.712657    4688 node_ready.go:49] node "pause-815300" has status "Ready":"True"
	I0706 21:28:45.712751    4688 node_ready.go:38] duration metric: took 4.8971ms waiting for node "pause-815300" to be "Ready" ...
	I0706 21:28:45.712751    4688 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 21:28:45.721054    4688 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-pxd8p" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.928472    4688 pod_ready.go:92] pod "coredns-5d78c9869d-pxd8p" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:45.928472    4688 pod_ready.go:81] duration metric: took 207.4172ms waiting for pod "coredns-5d78c9869d-pxd8p" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:45.928472    4688 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:46.315507    4688 pod_ready.go:92] pod "etcd-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:46.315556    4688 pod_ready.go:81] duration metric: took 387.0808ms waiting for pod "etcd-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:46.315556    4688 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:46.719096    4688 pod_ready.go:92] pod "kube-apiserver-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:46.719096    4688 pod_ready.go:81] duration metric: took 403.5373ms waiting for pod "kube-apiserver-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:46.719096    4688 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:47.112182    4688 pod_ready.go:92] pod "kube-controller-manager-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:47.112182    4688 pod_ready.go:81] duration metric: took 393.0829ms waiting for pod "kube-controller-manager-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:47.112252    4688 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q98dz" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:47.521795    4688 pod_ready.go:92] pod "kube-proxy-q98dz" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:47.521861    4688 pod_ready.go:81] duration metric: took 409.6054ms waiting for pod "kube-proxy-q98dz" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:47.521861    4688 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:47.910585    4688 pod_ready.go:92] pod "kube-scheduler-pause-815300" in "kube-system" namespace has status "Ready":"True"
	I0706 21:28:47.911129    4688 pod_ready.go:81] duration metric: took 389.2656ms waiting for pod "kube-scheduler-pause-815300" in "kube-system" namespace to be "Ready" ...
	I0706 21:28:47.911172    4688 pod_ready.go:38] duration metric: took 2.1983459s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0706 21:28:47.911211    4688 api_server.go:52] waiting for apiserver process to appear ...
	I0706 21:28:47.924012    4688 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 21:28:47.951453    4688 api_server.go:72] duration metric: took 2.3689304s to wait for apiserver process to appear ...
	I0706 21:28:47.951529    4688 api_server.go:88] waiting for apiserver healthz status ...
	I0706 21:28:47.951529    4688 api_server.go:253] Checking apiserver healthz at https://172.29.72.136:8443/healthz ...
	I0706 21:28:47.961811    4688 api_server.go:279] https://172.29.72.136:8443/healthz returned 200:
	ok
	I0706 21:28:47.963889    4688 api_server.go:141] control plane version: v1.27.3
	I0706 21:28:47.963889    4688 api_server.go:131] duration metric: took 12.3597ms to wait for apiserver health ...
	I0706 21:28:47.963889    4688 system_pods.go:43] waiting for kube-system pods to appear ...
	I0706 21:28:48.127886    4688 system_pods.go:59] 6 kube-system pods found
	I0706 21:28:48.127886    4688 system_pods.go:61] "coredns-5d78c9869d-pxd8p" [86d450a5-067f-4e41-b0ab-74b6a870077e] Running
	I0706 21:28:48.127886    4688 system_pods.go:61] "etcd-pause-815300" [67ca2ac7-3615-40ff-99c2-865777e75886] Running
	I0706 21:28:48.127886    4688 system_pods.go:61] "kube-apiserver-pause-815300" [419ef7fe-1b05-49b2-89db-698a672f6f40] Running
	I0706 21:28:48.127886    4688 system_pods.go:61] "kube-controller-manager-pause-815300" [4f33c53a-ea60-4baf-9bf0-d25422d4a2c1] Running
	I0706 21:28:48.127886    4688 system_pods.go:61] "kube-proxy-q98dz" [75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5] Running
	I0706 21:28:48.127886    4688 system_pods.go:61] "kube-scheduler-pause-815300" [b509458d-d17a-4b0c-8b10-74aabc3b620b] Running
	I0706 21:28:48.127886    4688 system_pods.go:74] duration metric: took 163.9962ms to wait for pod list to return data ...
	I0706 21:28:48.127886    4688 default_sa.go:34] waiting for default service account to be created ...
	I0706 21:28:48.313592    4688 default_sa.go:45] found service account: "default"
	I0706 21:28:48.313592    4688 default_sa.go:55] duration metric: took 185.7043ms for default service account to be created ...
	I0706 21:28:48.313592    4688 system_pods.go:116] waiting for k8s-apps to be running ...
	I0706 21:28:48.528196    4688 system_pods.go:86] 6 kube-system pods found
	I0706 21:28:48.528196    4688 system_pods.go:89] "coredns-5d78c9869d-pxd8p" [86d450a5-067f-4e41-b0ab-74b6a870077e] Running
	I0706 21:28:48.528196    4688 system_pods.go:89] "etcd-pause-815300" [67ca2ac7-3615-40ff-99c2-865777e75886] Running
	I0706 21:28:48.528196    4688 system_pods.go:89] "kube-apiserver-pause-815300" [419ef7fe-1b05-49b2-89db-698a672f6f40] Running
	I0706 21:28:48.528196    4688 system_pods.go:89] "kube-controller-manager-pause-815300" [4f33c53a-ea60-4baf-9bf0-d25422d4a2c1] Running
	I0706 21:28:48.528196    4688 system_pods.go:89] "kube-proxy-q98dz" [75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5] Running
	I0706 21:28:48.528196    4688 system_pods.go:89] "kube-scheduler-pause-815300" [b509458d-d17a-4b0c-8b10-74aabc3b620b] Running
	I0706 21:28:48.528196    4688 system_pods.go:126] duration metric: took 214.6034ms to wait for k8s-apps to be running ...
	I0706 21:28:48.528196    4688 system_svc.go:44] waiting for kubelet service to be running ....
	I0706 21:28:48.540056    4688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 21:28:48.560129    4688 system_svc.go:56] duration metric: took 31.9327ms WaitForService to wait for kubelet.
	I0706 21:28:48.560129    4688 kubeadm.go:581] duration metric: took 2.9776023s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0706 21:28:48.560129    4688 node_conditions.go:102] verifying NodePressure condition ...
	I0706 21:28:48.722300    4688 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0706 21:28:48.722411    4688 node_conditions.go:123] node cpu capacity is 2
	I0706 21:28:48.722411    4688 node_conditions.go:105] duration metric: took 162.2808ms to run NodePressure ...
	I0706 21:28:48.722468    4688 start.go:228] waiting for startup goroutines ...
	I0706 21:28:48.722468    4688 start.go:233] waiting for cluster config update ...
	I0706 21:28:48.722468    4688 start.go:242] writing updated cluster config ...
	I0706 21:28:48.737148    4688 ssh_runner.go:195] Run: rm -f paused
	I0706 21:28:48.956417    4688 start.go:642] kubectl: 1.18.2, cluster: 1.27.3 (minor skew: 9)
	I0706 21:28:48.958673    4688 out.go:177] 
	W0706 21:28:48.961626    4688 out.go:239] ! C:\ProgramData\chocolatey\bin\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.27.3.
	I0706 21:28:48.966597    4688 out.go:177]   - Want kubectl v1.27.3? Try 'minikube kubectl -- get pods -A'
	I0706 21:28:48.969434    4688 out.go:177] * Done! kubectl is now configured to use "pause-815300" cluster and "default" namespace by default
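The warning above compares the local kubectl minor version (1.18) against the cluster's (1.27) and reports `minor skew: 9`. An illustrative sketch of that comparison (not minikube's actual implementation):

```python
# Illustrative minor-version skew computation, matching the log line
# "kubectl: 1.18.2, cluster: 1.27.3 (minor skew: 9)".
def minor_skew(kubectl_version: str, cluster_version: str) -> int:
    k_minor = int(kubectl_version.split(".")[1])
    c_minor = int(cluster_version.split(".")[1])
    return abs(c_minor - k_minor)
```

Kubernetes only supports kubectl within one minor version of the apiserver, which is why a skew of 9 triggers the incompatibility warning.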
	I0706 21:28:46.058432   11584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:28:46.058503   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:46.058577   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-990200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:28:47.065068   11584 main.go:141] libmachine: [stdout =====>] : 
	I0706 21:28:47.065068   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:48.072954   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-990200 ).state
	I0706 21:28:48.833648   11584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:28:48.833812   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:48.833872   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-990200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:28:50.231582   11584 main.go:141] libmachine: [stdout =====>] : 172.29.72.193
	
	I0706 21:28:50.231767   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:28:50.234763   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-990200 ).state
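The interleaved `11584` lines show libmachine polling Hyper-V in a loop: query the VM `.state`, then the first adapter's first IP address, repeating while PowerShell prints an empty `[stdout =====>]` line, until a non-empty address appears (here `172.29.72.193` at 21:28:50). A hedged sketch of extracting that address from such log lines (an illustrative parser, not part of minikube):

```python
import re

# Illustrative: find the first IPv4 address printed after a
# "[stdout =====>] :" marker in libmachine log output. Lines whose
# stdout payload is empty or non-IP (e.g. "Running") are skipped.
STDOUT_RE = re.compile(r"\[stdout =====>\] : (\S+)")
IP_RE = re.compile(r"\d+\.\d+\.\d+\.\d+")

def first_ip(log_lines):
    for line in log_lines:
        m = STDOUT_RE.search(line)
        if m and IP_RE.fullmatch(m.group(1)):
            return m.group(1)
    return None
```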
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-07-06 21:24:49 UTC, ends at Thu 2023-07-06 21:28:57 UTC. --
	Jul 06 21:28:27 pause-815300 cri-dockerd[5157]: time="2023-07-06T21:28:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf2028b209d6630fb1bee5062afc569ba6b6e15ef0ca75af15d552cf9dd41608/resolv.conf as [nameserver 172.29.64.1]"
	Jul 06 21:28:27 pause-815300 cri-dockerd[5157]: time="2023-07-06T21:28:27Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5d78c9869d-pxd8p_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b2c8b4352a760b2606c8414590efb2b2159345c8ce2c7757ab24320846d455f0\""
	Jul 06 21:28:27 pause-815300 dockerd[4885]: time="2023-07-06T21:28:27.788162161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:28:27 pause-815300 dockerd[4885]: time="2023-07-06T21:28:27.788488769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:27 pause-815300 dockerd[4885]: time="2023-07-06T21:28:27.788592571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:28:27 pause-815300 dockerd[4885]: time="2023-07-06T21:28:27.788660573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:30 pause-815300 cri-dockerd[5157]: time="2023-07-06T21:28:30Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.500264675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.502616827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.502825432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.503313443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.509742087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.510073694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.510241198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.510410102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:32 pause-815300 cri-dockerd[5157]: time="2023-07-06T21:28:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/735bf401835f44dba8df946338e2cf4fd6d1df227dea807ba5b79d2225d9143f/resolv.conf as [nameserver 172.29.64.1]"
	Jul 06 21:28:33 pause-815300 cri-dockerd[5157]: time="2023-07-06T21:28:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8aad1162fe0ff6dafc3ae50ca6cafd51f3faadf8b5426ac576a388308bdb4a49/resolv.conf as [nameserver 172.29.64.1]"
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.457170468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.457835683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.458007587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.458159590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.477959122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.478876242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.479073746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.479293251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	e1248db6576a7       5780543258cf0       23 seconds ago       Running             kube-proxy                2                   735bf401835f4
	06e5b783e8cac       ead0a4a53df89       23 seconds ago       Running             coredns                   2                   8aad1162fe0ff
	7d127d822e85b       08a0c939e61b7       30 seconds ago       Running             kube-apiserver            2                   bf2028b209d66
	0bc506f3a0621       7cffc01dba0e1       33 seconds ago       Running             kube-controller-manager   2                   b84b8fb1e31e3
	448d69e9bc929       86b6af7dd652c       33 seconds ago       Running             etcd                      2                   5313dba228dd8
	03848ffed74fd       41697ceeb70b3       33 seconds ago       Running             kube-scheduler            2                   b5c33f1621acc
	000c4dcb0b474       5780543258cf0       About a minute ago   Exited              kube-proxy                1                   225da473f1a17
	bfbf93f0c7843       41697ceeb70b3       About a minute ago   Exited              kube-scheduler            1                   3b5145a8fa620
	933f2bbb2a3f2       ead0a4a53df89       About a minute ago   Exited              coredns                   1                   b2c8b4352a760
	01e4333af66b9       86b6af7dd652c       About a minute ago   Exited              etcd                      1                   ea301d243ccef
	f490307bd0e1d       08a0c939e61b7       About a minute ago   Exited              kube-apiserver            1                   72d473bc5f4ce
	d3414fa8c8b8c       7cffc01dba0e1       About a minute ago   Exited              kube-controller-manager   1                   0461c9cafea21
	
	* 
	* ==> coredns [06e5b783e8ca] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 1008319858cd8849366c0f5555156ab5ce20cd98fedc211c6675234f8e435bfd28cd4ed3ec9afaafbad6dd8b85ab8681d4da6cc55eede0ec805bf7bd7719a5c3
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59553 - 19216 "HINFO IN 6553101540797816629.2077856578413489308. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.060810326s
	
	* 
	* ==> coredns [933f2bbb2a3f] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1008319858cd8849366c0f5555156ab5ce20cd98fedc211c6675234f8e435bfd28cd4ed3ec9afaafbad6dd8b85ab8681d4da6cc55eede0ec805bf7bd7719a5c3
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:57191 - 3870 "HINFO IN 8961642224020709284.3531978192126873151. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042395298s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-815300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-815300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d384f293eb4d1ae13e8a16440afa4ec48ef3148
	                    minikube.k8s.io/name=pause-815300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_06T21_26_08_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 06 Jul 2023 21:26:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-815300
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 06 Jul 2023 21:28:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 06 Jul 2023 21:28:30 +0000   Thu, 06 Jul 2023 21:26:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 06 Jul 2023 21:28:30 +0000   Thu, 06 Jul 2023 21:26:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 06 Jul 2023 21:28:30 +0000   Thu, 06 Jul 2023 21:26:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 06 Jul 2023 21:28:30 +0000   Thu, 06 Jul 2023 21:26:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.72.136
	  Hostname:    pause-815300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	System Info:
	  Machine ID:                 14f47a14c69241d6960705f28555cb66
	  System UUID:                7f134c44-7889-2847-a01d-aae5dbf5d25d
	  Boot ID:                    e5481c21-c2ed-4a33-a9ab-b182c079d05b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-pxd8p                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m37s
	  kube-system                 etcd-pause-815300                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m49s
	  kube-system                 kube-apiserver-pause-815300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kube-controller-manager-pause-815300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m49s
	  kube-system                 kube-proxy-q98dz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-scheduler-pause-815300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m33s              kube-proxy       
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 3m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m (x8 over 3m)    kubelet          Node pause-815300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x8 over 3m)    kubelet          Node pause-815300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x7 over 3m)    kubelet          Node pause-815300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m49s              kubelet          Node pause-815300 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m49s              kubelet          Node pause-815300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m49s              kubelet          Node pause-815300 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m49s              kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m49s              kubelet          Starting kubelet.
	  Normal  NodeReady                2m48s              kubelet          Node pause-815300 status is now: NodeReady
	  Normal  RegisteredNode           2m38s              node-controller  Node pause-815300 event: Registered Node pause-815300 in Controller
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s (x8 over 34s)  kubelet          Node pause-815300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x8 over 34s)  kubelet          Node pause-815300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x7 over 34s)  kubelet          Node pause-815300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14s                node-controller  Node pause-815300 event: Registered Node pause-815300 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.170660] systemd-fstab-generator[968]: Ignoring "noauto" for root device
	[  +0.172421] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +2.906819] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.649854] systemd-fstab-generator[1139]: Ignoring "noauto" for root device
	[  +0.160855] systemd-fstab-generator[1150]: Ignoring "noauto" for root device
	[  +0.150048] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.143415] systemd-fstab-generator[1172]: Ignoring "noauto" for root device
	[  +0.172770] systemd-fstab-generator[1186]: Ignoring "noauto" for root device
	[ +18.665171] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[  +1.467103] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.492542] systemd-fstab-generator[1611]: Ignoring "noauto" for root device
	[  +0.690233] kauditd_printk_skb: 29 callbacks suppressed
	[Jul 6 21:26] systemd-fstab-generator[2650]: Ignoring "noauto" for root device
	[ +25.931496] kauditd_printk_skb: 28 callbacks suppressed
	[Jul 6 21:27] systemd-fstab-generator[4435]: Ignoring "noauto" for root device
	[  +0.617158] systemd-fstab-generator[4471]: Ignoring "noauto" for root device
	[  +0.235390] systemd-fstab-generator[4482]: Ignoring "noauto" for root device
	[  +0.257245] systemd-fstab-generator[4495]: Ignoring "noauto" for root device
	[ +13.377258] systemd-fstab-generator[5038]: Ignoring "noauto" for root device
	[  +0.206796] systemd-fstab-generator[5049]: Ignoring "noauto" for root device
	[  +0.191492] systemd-fstab-generator[5060]: Ignoring "noauto" for root device
	[  +0.185583] systemd-fstab-generator[5071]: Ignoring "noauto" for root device
	[  +0.224250] systemd-fstab-generator[5091]: Ignoring "noauto" for root device
	[  +8.441979] kauditd_printk_skb: 29 callbacks suppressed
	[Jul 6 21:28] systemd-fstab-generator[6786]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [01e4333af66b] <==
	* {"level":"info","ts":"2023-07-06T21:28:00.574Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"39.699798ms"}
	{"level":"info","ts":"2023-07-06T21:28:00.588Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-07-06T21:28:00.634Z","caller":"etcdserver/raft.go:529","msg":"restarting local member","cluster-id":"f84c7e4a7e9102ad","local-member-id":"f70b7b8bea0a4456","commit-index":468}
	{"level":"info","ts":"2023-07-06T21:28:00.634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 switched to configuration voters=()"}
	{"level":"info","ts":"2023-07-06T21:28:00.634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 became follower at term 2"}
	{"level":"info","ts":"2023-07-06T21:28:00.635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f70b7b8bea0a4456 [peers: [], term: 2, commit: 468, applied: 0, lastindex: 468, lastterm: 2]"}
	{"level":"warn","ts":"2023-07-06T21:28:01.184Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2023-07-06T21:28:01.697Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":443}
	{"level":"info","ts":"2023-07-06T21:28:02.222Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2023-07-06T21:28:02.728Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"f70b7b8bea0a4456","timeout":"7s"}
	{"level":"info","ts":"2023-07-06T21:28:02.728Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"f70b7b8bea0a4456"}
	{"level":"info","ts":"2023-07-06T21:28:02.729Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"f70b7b8bea0a4456","local-server-version":"3.5.7","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-07-06T21:28:02.729Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-07-06T21:28:02.730Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 switched to configuration voters=(17801457792969229398)"}
	{"level":"info","ts":"2023-07-06T21:28:02.730Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-06T21:28:02.730Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-06T21:28:02.730Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-06T21:28:02.730Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f84c7e4a7e9102ad","local-member-id":"f70b7b8bea0a4456","added-peer-id":"f70b7b8bea0a4456","added-peer-peer-urls":["https://172.29.72.136:2380"]}
	{"level":"info","ts":"2023-07-06T21:28:02.730Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f84c7e4a7e9102ad","local-member-id":"f70b7b8bea0a4456","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T21:28:02.730Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T21:28:02.749Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-06T21:28:02.749Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"172.29.72.136:2380"}
	{"level":"info","ts":"2023-07-06T21:28:02.749Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"172.29.72.136:2380"}
	{"level":"info","ts":"2023-07-06T21:28:02.750Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"f70b7b8bea0a4456","initial-advertise-peer-urls":["https://172.29.72.136:2380"],"listen-peer-urls":["https://172.29.72.136:2380"],"advertise-client-urls":["https://172.29.72.136:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.72.136:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-06T21:28:02.750Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	* 
	* ==> etcd [448d69e9bc92] <==
	* {"level":"info","ts":"2023-07-06T21:28:27.358Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-06T21:28:27.358Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-06T21:28:27.358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 switched to configuration voters=(17801457792969229398)"}
	{"level":"info","ts":"2023-07-06T21:28:27.359Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f84c7e4a7e9102ad","local-member-id":"f70b7b8bea0a4456","added-peer-id":"f70b7b8bea0a4456","added-peer-peer-urls":["https://172.29.72.136:2380"]}
	{"level":"info","ts":"2023-07-06T21:28:27.359Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f84c7e4a7e9102ad","local-member-id":"f70b7b8bea0a4456","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T21:28:27.359Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T21:28:27.361Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-06T21:28:27.364Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"172.29.72.136:2380"}
	{"level":"info","ts":"2023-07-06T21:28:27.364Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"172.29.72.136:2380"}
	{"level":"info","ts":"2023-07-06T21:28:27.364Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"f70b7b8bea0a4456","initial-advertise-peer-urls":["https://172.29.72.136:2380"],"listen-peer-urls":["https://172.29.72.136:2380"],"advertise-client-urls":["https://172.29.72.136:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.72.136:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-06T21:28:27.364Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-06T21:28:28.569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-06T21:28:28.569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-06T21:28:28.569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 received MsgPreVoteResp from f70b7b8bea0a4456 at term 2"}
	{"level":"info","ts":"2023-07-06T21:28:28.570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 became candidate at term 3"}
	{"level":"info","ts":"2023-07-06T21:28:28.570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 received MsgVoteResp from f70b7b8bea0a4456 at term 3"}
	{"level":"info","ts":"2023-07-06T21:28:28.570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 became leader at term 3"}
	{"level":"info","ts":"2023-07-06T21:28:28.570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f70b7b8bea0a4456 elected leader f70b7b8bea0a4456 at term 3"}
	{"level":"info","ts":"2023-07-06T21:28:28.587Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f70b7b8bea0a4456","local-member-attributes":"{Name:pause-815300 ClientURLs:[https://172.29.72.136:2379]}","request-path":"/0/members/f70b7b8bea0a4456/attributes","cluster-id":"f84c7e4a7e9102ad","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-06T21:28:28.587Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-06T21:28:28.591Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-06T21:28:28.592Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-06T21:28:28.593Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"172.29.72.136:2379"}
	{"level":"info","ts":"2023-07-06T21:28:28.594Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-06T21:28:28.594Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  21:28:57 up 4 min,  0 users,  load average: 3.15, 1.38, 0.56
	Linux pause-815300 5.10.57 #1 SMP Fri Jun 30 21:41:53 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7d127d822e85] <==
	* I0706 21:28:30.665875       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0706 21:28:30.666256       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0706 21:28:30.666290       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0706 21:28:30.758038       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0706 21:28:30.761838       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0706 21:28:30.769912       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0706 21:28:30.825836       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0706 21:28:30.826752       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0706 21:28:30.826785       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0706 21:28:30.828309       1 aggregator.go:152] initial CRD sync complete...
	I0706 21:28:30.828345       1 autoregister_controller.go:141] Starting autoregister controller
	I0706 21:28:30.828353       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0706 21:28:30.828463       1 cache.go:39] Caches are synced for autoregister controller
	I0706 21:28:30.830839       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0706 21:28:30.837880       1 shared_informer.go:318] Caches are synced for configmaps
	I0706 21:28:30.840768       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0706 21:28:31.252542       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0706 21:28:31.631495       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0706 21:28:32.645866       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0706 21:28:32.667252       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0706 21:28:32.775404       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0706 21:28:32.837965       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0706 21:28:32.850532       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0706 21:28:43.580034       1 controller.go:624] quota admission added evaluator for: endpoints
	I0706 21:28:43.602773       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [f490307bd0e1] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0706 21:28:10.916395       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0706 21:28:11.787658       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0706 21:28:12.092541       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [0bc506f3a062] <==
	* I0706 21:28:43.557302       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0706 21:28:43.557499       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-815300"
	I0706 21:28:43.557562       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0706 21:28:43.557344       1 taint_manager.go:211] "Sending events to api server"
	I0706 21:28:43.557924       1 event.go:307] "Event occurred" object="pause-815300" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-815300 event: Registered Node pause-815300 in Controller"
	I0706 21:28:43.560306       1 shared_informer.go:318] Caches are synced for endpoint
	I0706 21:28:43.567728       1 shared_informer.go:318] Caches are synced for persistent volume
	I0706 21:28:43.568080       1 shared_informer.go:318] Caches are synced for HPA
	I0706 21:28:43.568149       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0706 21:28:43.568243       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0706 21:28:43.571330       1 shared_informer.go:318] Caches are synced for deployment
	I0706 21:28:43.572419       1 shared_informer.go:318] Caches are synced for disruption
	I0706 21:28:43.573562       1 shared_informer.go:318] Caches are synced for PVC protection
	I0706 21:28:43.580993       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0706 21:28:43.583459       1 shared_informer.go:318] Caches are synced for crt configmap
	I0706 21:28:43.584620       1 shared_informer.go:318] Caches are synced for stateful set
	I0706 21:28:43.586683       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0706 21:28:43.588758       1 shared_informer.go:318] Caches are synced for job
	I0706 21:28:43.590878       1 shared_informer.go:318] Caches are synced for GC
	I0706 21:28:43.609589       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0706 21:28:43.647310       1 shared_informer.go:318] Caches are synced for resource quota
	I0706 21:28:43.702553       1 shared_informer.go:318] Caches are synced for resource quota
	I0706 21:28:44.014362       1 shared_informer.go:318] Caches are synced for garbage collector
	I0706 21:28:44.022490       1 shared_informer.go:318] Caches are synced for garbage collector
	I0706 21:28:44.022544       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [d3414fa8c8b8] <==
	* I0706 21:28:00.279557       1 serving.go:348] Generated self-signed cert in-memory
	I0706 21:28:00.642465       1 controllermanager.go:187] "Starting" version="v1.27.3"
	I0706 21:28:00.642491       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 21:28:00.643914       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0706 21:28:00.644044       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0706 21:28:00.645355       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0706 21:28:00.645424       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [000c4dcb0b47] <==
	* 
	* 
	* ==> kube-proxy [e1248db6576a] <==
	* I0706 21:28:34.710562       1 node.go:141] Successfully retrieved node IP: 172.29.72.136
	I0706 21:28:34.711066       1 server_others.go:110] "Detected node IP" address="172.29.72.136"
	I0706 21:28:34.711196       1 server_others.go:554] "Using iptables proxy"
	I0706 21:28:34.751225       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0706 21:28:34.751316       1 server_others.go:192] "Using iptables Proxier"
	I0706 21:28:34.751384       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0706 21:28:34.751932       1 server.go:658] "Version info" version="v1.27.3"
	I0706 21:28:34.751970       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 21:28:34.752789       1 config.go:188] "Starting service config controller"
	I0706 21:28:34.752836       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0706 21:28:34.752863       1 config.go:97] "Starting endpoint slice config controller"
	I0706 21:28:34.752872       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0706 21:28:34.753511       1 config.go:315] "Starting node config controller"
	I0706 21:28:34.753545       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0706 21:28:34.853715       1 shared_informer.go:318] Caches are synced for node config
	I0706 21:28:34.854056       1 shared_informer.go:318] Caches are synced for service config
	I0706 21:28:34.854153       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [03848ffed74f] <==
	* W0706 21:28:28.086280       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://172.29.72.136:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.088199       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://172.29.72.136:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.086395       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://172.29.72.136:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.088566       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://172.29.72.136:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.088770       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://172.29.72.136:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.089052       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://172.29.72.136:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.088951       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.29.72.136:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.089329       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.29.72.136:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.086563       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://172.29.72.136:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.086644       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://172.29.72.136:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.086728       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://172.29.72.136:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.086808       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://172.29.72.136:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.086929       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.29.72.136:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.087006       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://172.29.72.136:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.086480       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://172.29.72.136:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.089627       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://172.29.72.136:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.089769       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://172.29.72.136:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.089913       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://172.29.72.136:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.090155       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://172.29.72.136:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.090331       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.29.72.136:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.090482       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://172.29.72.136:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.090632       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://172.29.72.136:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:30.779380       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0706 21:28:30.779441       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0706 21:28:30.882341       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [bfbf93f0c784] <==
	* I0706 21:28:01.311335       1 serving.go:348] Generated self-signed cert in-memory
	W0706 21:28:11.835195       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://172.29.72.136:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0706 21:28:11.835303       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0706 21:28:11.835314       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0706 21:28:13.900496       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0706 21:28:13.900543       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 21:28:13.902837       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	E0706 21:28:13.902931       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-07-06 21:24:49 UTC, ends at Thu 2023-07-06 21:28:58 UTC. --
	Jul 06 21:28:31 pause-815300 kubelet[6792]: E0706 21:28:31.022112    6792 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"pause-815300\" not found"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: E0706 21:28:31.416822    6792 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-815300\" already exists" pod="kube-system/kube-scheduler-pause-815300"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: E0706 21:28:31.709862    6792 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-815300\" already exists" pod="kube-system/kube-apiserver-pause-815300"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.733781    6792 apiserver.go:52] "Watching apiserver"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.737464    6792 topology_manager.go:212] "Topology Admit Handler"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.737588    6792 topology_manager.go:212] "Topology Admit Handler"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.782027    6792 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.850868    6792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5-lib-modules\") pod \"kube-proxy-q98dz\" (UID: \"75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5\") " pod="kube-system/kube-proxy-q98dz"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.851005    6792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5-kube-proxy\") pod \"kube-proxy-q98dz\" (UID: \"75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5\") " pod="kube-system/kube-proxy-q98dz"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.851040    6792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86d450a5-067f-4e41-b0ab-74b6a870077e-config-volume\") pod \"coredns-5d78c9869d-pxd8p\" (UID: \"86d450a5-067f-4e41-b0ab-74b6a870077e\") " pod="kube-system/coredns-5d78c9869d-pxd8p"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.851065    6792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5-xtables-lock\") pod \"kube-proxy-q98dz\" (UID: \"75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5\") " pod="kube-system/kube-proxy-q98dz"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.851131    6792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4p6z\" (UniqueName: \"kubernetes.io/projected/86d450a5-067f-4e41-b0ab-74b6a870077e-kube-api-access-p4p6z\") pod \"coredns-5d78c9869d-pxd8p\" (UID: \"86d450a5-067f-4e41-b0ab-74b6a870077e\") " pod="kube-system/coredns-5d78c9869d-pxd8p"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.851160    6792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ns9p\" (UniqueName: \"kubernetes.io/projected/75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5-kube-api-access-5ns9p\") pod \"kube-proxy-q98dz\" (UID: \"75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5\") " pod="kube-system/kube-proxy-q98dz"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.851173    6792 reconciler.go:41] "Reconciler: start to sync state"
	Jul 06 21:28:32 pause-815300 kubelet[6792]: E0706 21:28:32.722502    6792 kuberuntime_manager.go:1212] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.27.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},Vo
lumeMount{Name:kube-api-access-5ns9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod kube-proxy-q98dz_kube-system(75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Jul 06 21:28:32 pause-815300 kubelet[6792]: E0706 21:28:32.722559    6792 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-q98dz" podUID=75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5
	Jul 06 21:28:32 pause-815300 kubelet[6792]: I0706 21:28:32.733849    6792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="735bf401835f44dba8df946338e2cf4fd6d1df227dea807ba5b79d2225d9143f"
	Jul 06 21:28:32 pause-815300 kubelet[6792]: I0706 21:28:32.734317    6792 scope.go:115] "RemoveContainer" containerID="000c4dcb0b4749ca97de11e5a46b6d6c868faf4537f8cf251efa790d627bcca2"
	Jul 06 21:28:32 pause-815300 kubelet[6792]: E0706 21:28:32.744174    6792 kuberuntime_manager.go:1212] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.27.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},Vo
lumeMount{Name:kube-api-access-5ns9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod kube-proxy-q98dz_kube-system(75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Jul 06 21:28:32 pause-815300 kubelet[6792]: E0706 21:28:32.744215    6792 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-q98dz" podUID=75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5
	Jul 06 21:28:33 pause-815300 kubelet[6792]: I0706 21:28:33.249797    6792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8aad1162fe0ff6dafc3ae50ca6cafd51f3faadf8b5426ac576a388308bdb4a49"
	Jul 06 21:28:33 pause-815300 kubelet[6792]: E0706 21:28:33.317833    6792 kuberuntime_manager.go:1212] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.10.1,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-p4p6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropa
gation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,}
,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod coredns-5d78c9869d-pxd8p_kube-system(86d450a5-067f-4e41-b0ab-74b6a870077e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Jul 06 21:28:33 pause-815300 kubelet[6792]: E0706 21:28:33.317885    6792 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-5d78c9869d-pxd8p" podUID=86d450a5-067f-4e41-b0ab-74b6a870077e
	Jul 06 21:28:34 pause-815300 kubelet[6792]: I0706 21:28:34.278771    6792 scope.go:115] "RemoveContainer" containerID="933f2bbb2a3f2985b7775d13d34d8075e5007f7898cde78115ea68f6d28a753d"
	Jul 06 21:28:34 pause-815300 kubelet[6792]: I0706 21:28:34.308347    6792 scope.go:115] "RemoveContainer" containerID="000c4dcb0b4749ca97de11e5a46b6d6c868faf4537f8cf251efa790d627bcca2"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-815300 -n pause-815300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-815300 -n pause-815300: (4.6092318s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-815300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-815300 -n pause-815300
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-815300 -n pause-815300: (4.8874087s)
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-815300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-815300 logs -n 25: (12.8739238s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |                   |         |                     |                     |
	|         | --no-pager                                           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo cat                            | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo cat                            | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | cri-dockerd --version                                |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | systemctl status containerd                          |                           |                   |         |                     |                     |
	|         | --all --full --no-pager                              |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | systemctl cat containerd                             |                           |                   |         |                     |                     |
	|         | --no-pager                                           |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo cat                            | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo cat                            | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | containerd config dump                               |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | systemctl status crio --all                          |                           |                   |         |                     |                     |
	|         | --full --no-pager                                    |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo                                | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo find                           | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |                   |         |                     |                     |
	| ssh     | -p cilium-852700 sudo crio                           | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC |                     |
	|         | config                                               |                           |                   |         |                     |                     |
	| delete  | -p cilium-852700                                     | cilium-852700             | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC | 06 Jul 23 21:24 UTC |
	| start   | -p kubernetes-upgrade-990200                         | kubernetes-upgrade-990200 | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC | 06 Jul 23 21:27 UTC |
	|         | --memory=2200                                        |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	| ssh     | cert-options-864500 ssh                              | cert-options-864500       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC | 06 Jul 23 21:24 UTC |
	|         | openssl x509 -text -noout -in                        |                           |                   |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                |                           |                   |         |                     |                     |
	| ssh     | -p cert-options-864500 -- sudo                       | cert-options-864500       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:24 UTC | 06 Jul 23 21:25 UTC |
	|         | cat /etc/kubernetes/admin.conf                       |                           |                   |         |                     |                     |
	| delete  | -p cert-options-864500                               | cert-options-864500       | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:25 UTC | 06 Jul 23 21:25 UTC |
	| start   | -p cert-expiration-861000                            | cert-expiration-861000    | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:26 UTC | 06 Jul 23 21:27 UTC |
	|         | --memory=2048                                        |                           |                   |         |                     |                     |
	|         | --cert-expiration=8760h                              |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	| start   | -p pause-815300                                      | pause-815300              | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:27 UTC | 06 Jul 23 21:28 UTC |
	|         | --alsologtostderr -v=1                               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	| delete  | -p cert-expiration-861000                            | cert-expiration-861000    | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:27 UTC | 06 Jul 23 21:28 UTC |
	| stop    | -p kubernetes-upgrade-990200                         | kubernetes-upgrade-990200 | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:27 UTC | 06 Jul 23 21:28 UTC |
	| start   | -p kubernetes-upgrade-990200                         | kubernetes-upgrade-990200 | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:28 UTC |                     |
	|         | --memory=2200                                        |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                         |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	| start   | -p force-systemd-env-807400                          | force-systemd-env-807400  | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:28 UTC |                     |
	|         | --memory=2048                                        |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	| start   | -p stopped-upgrade-322600                            | stopped-upgrade-322600    | minikube6\jenkins | v1.30.1 | 06 Jul 23 21:29 UTC |                     |
	|         | --memory=2200                                        |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |                   |         |                     |                     |
	|         | --driver=hyperv                                      |                           |                   |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/06 21:29:05
	Running on machine: minikube6
	Binary: Built with gc go1.20.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0706 21:29:05.104064   10604 out.go:296] Setting OutFile to fd 1928 ...
	I0706 21:29:05.168518   10604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 21:29:05.168518   10604 out.go:309] Setting ErrFile to fd 1564...
	I0706 21:29:05.168518   10604 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 21:29:05.191620   10604 out.go:303] Setting JSON to false
	I0706 21:29:05.194689   10604 start.go:127] hostinfo: {"hostname":"minikube6","uptime":497082,"bootTime":1688181863,"procs":156,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3155 Build 19045.3155","kernelVersion":"10.0.19045.3155 Build 19045.3155","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0706 21:29:05.194689   10604 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 21:29:05.431222   10604 out.go:177] * [stopped-upgrade-322600] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	I0706 21:29:05.478726   10604 notify.go:220] Checking for updates...
	I0706 21:29:05.624993   10604 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 21:29:05.774693   10604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 21:29:01.581607   11584 main.go:141] libmachine: [stdout =====>] : 172.29.72.193
	
	I0706 21:29:01.581607   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:29:01.582039   11584 sshutil.go:53] new ssh client: &{IP:172.29.72.193 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\kubernetes-upgrade-990200\id_rsa Username:docker}
	I0706 21:29:01.681191   11584 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (1.8153852s)
	I0706 21:29:01.681232   11584 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1249 bytes)
	I0706 21:29:01.719296   11584 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0706 21:29:01.757509   11584 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0706 21:29:01.794459   11584 provision.go:86] duration metric: configureAuth took 5.8433259s
	I0706 21:29:01.794574   11584 buildroot.go:189] setting minikube options for container-runtime
	I0706 21:29:01.795249   11584 config.go:182] Loaded profile config "kubernetes-upgrade-990200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 21:29:01.795457   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-990200 ).state
	I0706 21:29:02.507152   11584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:29:02.507211   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:29:02.507269   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-990200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:29:03.609781   11584 main.go:141] libmachine: [stdout =====>] : 172.29.72.193
	
	I0706 21:29:03.609781   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:29:03.614149   11584 main.go:141] libmachine: Using SSH client type: native
	I0706 21:29:03.615005   11584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.72.193 22 <nil> <nil>}
	I0706 21:29:03.615005   11584 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0706 21:29:03.758550   11584 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0706 21:29:03.758640   11584 buildroot.go:70] root file system type: tmpfs
	I0706 21:29:03.758816   11584 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0706 21:29:03.758878   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-990200 ).state
	I0706 21:29:04.495169   11584 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 21:29:04.495345   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:29:04.495345   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-990200 ).networkadapters[0]).ipaddresses[0]
	I0706 21:29:05.553238   11584 main.go:141] libmachine: [stdout =====>] : 172.29.72.193
	
	I0706 21:29:05.553238   11584 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:29:05.557191   11584 main.go:141] libmachine: Using SSH client type: native
	I0706 21:29:05.557535   11584 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x578e20] 0x57bcc0 <nil>  [] 0s} 172.29.72.193 22 <nil> <nil>}
	I0706 21:29:05.558124   11584 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0706 21:29:05.719048   11584 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0706 21:29:05.719048   11584 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-990200 ).state
	I0706 21:29:05.973049   10604 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0706 21:29:06.174680   10604 out.go:177]   - MINIKUBE_LOCATION=16832
	I0706 21:29:06.310110   10604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 21:29:06.372910   10604 config.go:182] Loaded profile config "stopped-upgrade-322600": Driver=, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I0706 21:29:06.372910   10604 start_flags.go:683] config upgrade: Driver=hyperv
	I0706 21:29:06.372910   10604 start_flags.go:695] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953
	I0706 21:29:06.372910   10604 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\stopped-upgrade-322600\config.json ...
	I0706 21:29:06.517537   10604 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0706 21:29:06.578909   10604 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 21:29:08.418503   10604 out.go:177] * Using the hyperv driver based on existing profile
	I0706 21:29:08.530759   10604 start.go:297] selected driver: hyperv
	I0706 21:29:08.530759   10604 start.go:944] validating driver "hyperv" against &{Name:stopped-upgrade-322600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.29.71.148 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 21:29:08.530759   10604 start.go:955] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 21:29:08.580478   10604 cni.go:84] Creating CNI manager for ""
	I0706 21:29:08.580478   10604 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0706 21:29:08.580478   10604 start_flags.go:319] config:
	{Name:stopped-upgrade-322600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:hyperv Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:172.29.71.148 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 21:29:08.581258   10604 iso.go:125] acquiring lock: {Name:mk7608081596fbafb2a8a8984d26768bdf174467 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 21:29:08.702546   10604 out.go:177] * Starting control plane node stopped-upgrade-322600 in cluster stopped-upgrade-322600
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-07-06 21:24:49 UTC, ends at Thu 2023-07-06 21:29:12 UTC. --
	Jul 06 21:28:27 pause-815300 cri-dockerd[5157]: time="2023-07-06T21:28:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf2028b209d6630fb1bee5062afc569ba6b6e15ef0ca75af15d552cf9dd41608/resolv.conf as [nameserver 172.29.64.1]"
	Jul 06 21:28:27 pause-815300 cri-dockerd[5157]: time="2023-07-06T21:28:27Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5d78c9869d-pxd8p_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"b2c8b4352a760b2606c8414590efb2b2159345c8ce2c7757ab24320846d455f0\""
	Jul 06 21:28:27 pause-815300 dockerd[4885]: time="2023-07-06T21:28:27.788162161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:28:27 pause-815300 dockerd[4885]: time="2023-07-06T21:28:27.788488769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:27 pause-815300 dockerd[4885]: time="2023-07-06T21:28:27.788592571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:28:27 pause-815300 dockerd[4885]: time="2023-07-06T21:28:27.788660573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:30 pause-815300 cri-dockerd[5157]: time="2023-07-06T21:28:30Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.500264675Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.502616827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.502825432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.503313443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.509742087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.510073694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.510241198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:28:32 pause-815300 dockerd[4885]: time="2023-07-06T21:28:32.510410102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:32 pause-815300 cri-dockerd[5157]: time="2023-07-06T21:28:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/735bf401835f44dba8df946338e2cf4fd6d1df227dea807ba5b79d2225d9143f/resolv.conf as [nameserver 172.29.64.1]"
	Jul 06 21:28:33 pause-815300 cri-dockerd[5157]: time="2023-07-06T21:28:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8aad1162fe0ff6dafc3ae50ca6cafd51f3faadf8b5426ac576a388308bdb4a49/resolv.conf as [nameserver 172.29.64.1]"
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.457170468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.457835683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.458007587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.458159590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.477959122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.478876242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.479073746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 06 21:28:34 pause-815300 dockerd[4885]: time="2023-07-06T21:28:34.479293251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	e1248db6576a7       5780543258cf0       39 seconds ago       Running             kube-proxy                2                   735bf401835f4
	06e5b783e8cac       ead0a4a53df89       39 seconds ago       Running             coredns                   2                   8aad1162fe0ff
	7d127d822e85b       08a0c939e61b7       46 seconds ago       Running             kube-apiserver            2                   bf2028b209d66
	0bc506f3a0621       7cffc01dba0e1       49 seconds ago       Running             kube-controller-manager   2                   b84b8fb1e31e3
	448d69e9bc929       86b6af7dd652c       49 seconds ago       Running             etcd                      2                   5313dba228dd8
	03848ffed74fd       41697ceeb70b3       49 seconds ago       Running             kube-scheduler            2                   b5c33f1621acc
	000c4dcb0b474       5780543258cf0       About a minute ago   Exited              kube-proxy                1                   225da473f1a17
	bfbf93f0c7843       41697ceeb70b3       About a minute ago   Exited              kube-scheduler            1                   3b5145a8fa620
	933f2bbb2a3f2       ead0a4a53df89       About a minute ago   Exited              coredns                   1                   b2c8b4352a760
	01e4333af66b9       86b6af7dd652c       About a minute ago   Exited              etcd                      1                   ea301d243ccef
	f490307bd0e1d       08a0c939e61b7       About a minute ago   Exited              kube-apiserver            1                   72d473bc5f4ce
	d3414fa8c8b8c       7cffc01dba0e1       About a minute ago   Exited              kube-controller-manager   1                   0461c9cafea21
	
	* 
	* ==> coredns [06e5b783e8ca] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 1008319858cd8849366c0f5555156ab5ce20cd98fedc211c6675234f8e435bfd28cd4ed3ec9afaafbad6dd8b85ab8681d4da6cc55eede0ec805bf7bd7719a5c3
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59553 - 19216 "HINFO IN 6553101540797816629.2077856578413489308. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.060810326s
	
	* 
	* ==> coredns [933f2bbb2a3f] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1008319858cd8849366c0f5555156ab5ce20cd98fedc211c6675234f8e435bfd28cd4ed3ec9afaafbad6dd8b85ab8681d4da6cc55eede0ec805bf7bd7719a5c3
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:57191 - 3870 "HINFO IN 8961642224020709284.3531978192126873151. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.042395298s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-815300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-815300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4d384f293eb4d1ae13e8a16440afa4ec48ef3148
	                    minikube.k8s.io/name=pause-815300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_06T21_26_08_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 06 Jul 2023 21:26:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-815300
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 06 Jul 2023 21:29:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 06 Jul 2023 21:28:30 +0000   Thu, 06 Jul 2023 21:26:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 06 Jul 2023 21:28:30 +0000   Thu, 06 Jul 2023 21:26:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 06 Jul 2023 21:28:30 +0000   Thu, 06 Jul 2023 21:26:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 06 Jul 2023 21:28:30 +0000   Thu, 06 Jul 2023 21:26:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.72.136
	  Hostname:    pause-815300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017500Ki
	  pods:               110
	System Info:
	  Machine ID:                 14f47a14c69241d6960705f28555cb66
	  System UUID:                7f134c44-7889-2847-a01d-aae5dbf5d25d
	  Boot ID:                    e5481c21-c2ed-4a33-a9ab-b182c079d05b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-pxd8p                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m55s
	  kube-system                 etcd-pause-815300                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         3m7s
	  kube-system                 kube-apiserver-pause-815300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kube-controller-manager-pause-815300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kube-proxy-q98dz                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kube-scheduler-pause-815300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m51s                  kube-proxy       
	  Normal  Starting                 40s                    kube-proxy       
	  Normal  Starting                 3m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m18s (x8 over 3m18s)  kubelet          Node pause-815300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m18s (x8 over 3m18s)  kubelet          Node pause-815300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m18s (x7 over 3m18s)  kubelet          Node pause-815300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     3m7s                   kubelet          Node pause-815300 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  3m7s                   kubelet          Node pause-815300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s                   kubelet          Node pause-815300 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  3m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m7s                   kubelet          Starting kubelet.
	  Normal  NodeReady                3m6s                   kubelet          Node pause-815300 status is now: NodeReady
	  Normal  RegisteredNode           2m56s                  node-controller  Node pause-815300 event: Registered Node pause-815300 in Controller
	  Normal  Starting                 52s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  52s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  51s (x8 over 52s)      kubelet          Node pause-815300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 52s)      kubelet          Node pause-815300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x7 over 52s)      kubelet          Node pause-815300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s                    node-controller  Node pause-815300 event: Registered Node pause-815300 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.170660] systemd-fstab-generator[968]: Ignoring "noauto" for root device
	[  +0.172421] systemd-fstab-generator[981]: Ignoring "noauto" for root device
	[  +2.906819] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.649854] systemd-fstab-generator[1139]: Ignoring "noauto" for root device
	[  +0.160855] systemd-fstab-generator[1150]: Ignoring "noauto" for root device
	[  +0.150048] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.143415] systemd-fstab-generator[1172]: Ignoring "noauto" for root device
	[  +0.172770] systemd-fstab-generator[1186]: Ignoring "noauto" for root device
	[ +18.665171] systemd-fstab-generator[1288]: Ignoring "noauto" for root device
	[  +1.467103] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.492542] systemd-fstab-generator[1611]: Ignoring "noauto" for root device
	[  +0.690233] kauditd_printk_skb: 29 callbacks suppressed
	[Jul 6 21:26] systemd-fstab-generator[2650]: Ignoring "noauto" for root device
	[ +25.931496] kauditd_printk_skb: 28 callbacks suppressed
	[Jul 6 21:27] systemd-fstab-generator[4435]: Ignoring "noauto" for root device
	[  +0.617158] systemd-fstab-generator[4471]: Ignoring "noauto" for root device
	[  +0.235390] systemd-fstab-generator[4482]: Ignoring "noauto" for root device
	[  +0.257245] systemd-fstab-generator[4495]: Ignoring "noauto" for root device
	[ +13.377258] systemd-fstab-generator[5038]: Ignoring "noauto" for root device
	[  +0.206796] systemd-fstab-generator[5049]: Ignoring "noauto" for root device
	[  +0.191492] systemd-fstab-generator[5060]: Ignoring "noauto" for root device
	[  +0.185583] systemd-fstab-generator[5071]: Ignoring "noauto" for root device
	[  +0.224250] systemd-fstab-generator[5091]: Ignoring "noauto" for root device
	[  +8.441979] kauditd_printk_skb: 29 callbacks suppressed
	[Jul 6 21:28] systemd-fstab-generator[6786]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [01e4333af66b] <==
	* {"level":"info","ts":"2023-07-06T21:28:00.574Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"39.699798ms"}
	{"level":"info","ts":"2023-07-06T21:28:00.588Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-07-06T21:28:00.634Z","caller":"etcdserver/raft.go:529","msg":"restarting local member","cluster-id":"f84c7e4a7e9102ad","local-member-id":"f70b7b8bea0a4456","commit-index":468}
	{"level":"info","ts":"2023-07-06T21:28:00.634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 switched to configuration voters=()"}
	{"level":"info","ts":"2023-07-06T21:28:00.634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 became follower at term 2"}
	{"level":"info","ts":"2023-07-06T21:28:00.635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft f70b7b8bea0a4456 [peers: [], term: 2, commit: 468, applied: 0, lastindex: 468, lastterm: 2]"}
	{"level":"warn","ts":"2023-07-06T21:28:01.184Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2023-07-06T21:28:01.697Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":443}
	{"level":"info","ts":"2023-07-06T21:28:02.222Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2023-07-06T21:28:02.728Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"f70b7b8bea0a4456","timeout":"7s"}
	{"level":"info","ts":"2023-07-06T21:28:02.728Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"f70b7b8bea0a4456"}
	{"level":"info","ts":"2023-07-06T21:28:02.729Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"f70b7b8bea0a4456","local-server-version":"3.5.7","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-07-06T21:28:02.729Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-07-06T21:28:02.730Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 switched to configuration voters=(17801457792969229398)"}
	{"level":"info","ts":"2023-07-06T21:28:02.730Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-06T21:28:02.730Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-06T21:28:02.730Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-06T21:28:02.730Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f84c7e4a7e9102ad","local-member-id":"f70b7b8bea0a4456","added-peer-id":"f70b7b8bea0a4456","added-peer-peer-urls":["https://172.29.72.136:2380"]}
	{"level":"info","ts":"2023-07-06T21:28:02.730Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f84c7e4a7e9102ad","local-member-id":"f70b7b8bea0a4456","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T21:28:02.730Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T21:28:02.749Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-06T21:28:02.749Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"172.29.72.136:2380"}
	{"level":"info","ts":"2023-07-06T21:28:02.749Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"172.29.72.136:2380"}
	{"level":"info","ts":"2023-07-06T21:28:02.750Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"f70b7b8bea0a4456","initial-advertise-peer-urls":["https://172.29.72.136:2380"],"listen-peer-urls":["https://172.29.72.136:2380"],"advertise-client-urls":["https://172.29.72.136:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.72.136:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-06T21:28:02.750Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	* 
	* ==> etcd [448d69e9bc92] <==
	* {"level":"info","ts":"2023-07-06T21:28:27.359Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"f84c7e4a7e9102ad","local-member-id":"f70b7b8bea0a4456","added-peer-id":"f70b7b8bea0a4456","added-peer-peer-urls":["https://172.29.72.136:2380"]}
	{"level":"info","ts":"2023-07-06T21:28:27.359Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f84c7e4a7e9102ad","local-member-id":"f70b7b8bea0a4456","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T21:28:27.359Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-06T21:28:27.361Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-06T21:28:27.364Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"172.29.72.136:2380"}
	{"level":"info","ts":"2023-07-06T21:28:27.364Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"172.29.72.136:2380"}
	{"level":"info","ts":"2023-07-06T21:28:27.364Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"f70b7b8bea0a4456","initial-advertise-peer-urls":["https://172.29.72.136:2380"],"listen-peer-urls":["https://172.29.72.136:2380"],"advertise-client-urls":["https://172.29.72.136:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.72.136:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-06T21:28:27.364Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-06T21:28:28.569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-06T21:28:28.569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-06T21:28:28.569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 received MsgPreVoteResp from f70b7b8bea0a4456 at term 2"}
	{"level":"info","ts":"2023-07-06T21:28:28.570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 became candidate at term 3"}
	{"level":"info","ts":"2023-07-06T21:28:28.570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 received MsgVoteResp from f70b7b8bea0a4456 at term 3"}
	{"level":"info","ts":"2023-07-06T21:28:28.570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f70b7b8bea0a4456 became leader at term 3"}
	{"level":"info","ts":"2023-07-06T21:28:28.570Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f70b7b8bea0a4456 elected leader f70b7b8bea0a4456 at term 3"}
	{"level":"info","ts":"2023-07-06T21:28:28.587Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f70b7b8bea0a4456","local-member-attributes":"{Name:pause-815300 ClientURLs:[https://172.29.72.136:2379]}","request-path":"/0/members/f70b7b8bea0a4456/attributes","cluster-id":"f84c7e4a7e9102ad","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-06T21:28:28.587Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-06T21:28:28.591Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-06T21:28:28.592Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-06T21:28:28.593Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"172.29.72.136:2379"}
	{"level":"info","ts":"2023-07-06T21:28:28.594Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-06T21:28:28.594Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-06T21:29:12.088Z","caller":"traceutil/trace.go:171","msg":"trace[361268387] transaction","detail":"{read_only:false; response_revision:535; number_of_response:1; }","duration":"102.580671ms","start":"2023-07-06T21:29:11.986Z","end":"2023-07-06T21:29:12.088Z","steps":["trace[361268387] 'process raft request'  (duration: 102.073864ms)"],"step_count":1}
	{"level":"warn","ts":"2023-07-06T21:29:12.290Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.821765ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:339"}
	{"level":"info","ts":"2023-07-06T21:29:12.290Z","caller":"traceutil/trace.go:171","msg":"trace[794827961] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:535; }","duration":"110.089668ms","start":"2023-07-06T21:29:12.180Z","end":"2023-07-06T21:29:12.290Z","steps":["trace[794827961] 'range keys from in-memory index tree'  (duration: 109.643462ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  21:29:17 up 4 min,  0 users,  load average: 2.53, 1.33, 0.55
	Linux pause-815300 5.10.57 #1 SMP Fri Jun 30 21:41:53 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [7d127d822e85] <==
	* I0706 21:28:30.665875       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0706 21:28:30.666256       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0706 21:28:30.666290       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0706 21:28:30.758038       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0706 21:28:30.761838       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0706 21:28:30.769912       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0706 21:28:30.825836       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0706 21:28:30.826752       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0706 21:28:30.826785       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0706 21:28:30.828309       1 aggregator.go:152] initial CRD sync complete...
	I0706 21:28:30.828345       1 autoregister_controller.go:141] Starting autoregister controller
	I0706 21:28:30.828353       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0706 21:28:30.828463       1 cache.go:39] Caches are synced for autoregister controller
	I0706 21:28:30.830839       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0706 21:28:30.837880       1 shared_informer.go:318] Caches are synced for configmaps
	I0706 21:28:30.840768       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0706 21:28:31.252542       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0706 21:28:31.631495       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0706 21:28:32.645866       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0706 21:28:32.667252       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0706 21:28:32.775404       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0706 21:28:32.837965       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0706 21:28:32.850532       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0706 21:28:43.580034       1 controller.go:624] quota admission added evaluator for: endpoints
	I0706 21:28:43.602773       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [f490307bd0e1] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0706 21:28:10.916395       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0706 21:28:11.787658       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0706 21:28:12.092541       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [0bc506f3a062] <==
	* I0706 21:28:43.557302       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0706 21:28:43.557499       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-815300"
	I0706 21:28:43.557562       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0706 21:28:43.557344       1 taint_manager.go:211] "Sending events to api server"
	I0706 21:28:43.557924       1 event.go:307] "Event occurred" object="pause-815300" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-815300 event: Registered Node pause-815300 in Controller"
	I0706 21:28:43.560306       1 shared_informer.go:318] Caches are synced for endpoint
	I0706 21:28:43.567728       1 shared_informer.go:318] Caches are synced for persistent volume
	I0706 21:28:43.568080       1 shared_informer.go:318] Caches are synced for HPA
	I0706 21:28:43.568149       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0706 21:28:43.568243       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0706 21:28:43.571330       1 shared_informer.go:318] Caches are synced for deployment
	I0706 21:28:43.572419       1 shared_informer.go:318] Caches are synced for disruption
	I0706 21:28:43.573562       1 shared_informer.go:318] Caches are synced for PVC protection
	I0706 21:28:43.580993       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0706 21:28:43.583459       1 shared_informer.go:318] Caches are synced for crt configmap
	I0706 21:28:43.584620       1 shared_informer.go:318] Caches are synced for stateful set
	I0706 21:28:43.586683       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0706 21:28:43.588758       1 shared_informer.go:318] Caches are synced for job
	I0706 21:28:43.590878       1 shared_informer.go:318] Caches are synced for GC
	I0706 21:28:43.609589       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0706 21:28:43.647310       1 shared_informer.go:318] Caches are synced for resource quota
	I0706 21:28:43.702553       1 shared_informer.go:318] Caches are synced for resource quota
	I0706 21:28:44.014362       1 shared_informer.go:318] Caches are synced for garbage collector
	I0706 21:28:44.022490       1 shared_informer.go:318] Caches are synced for garbage collector
	I0706 21:28:44.022544       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [d3414fa8c8b8] <==
	* I0706 21:28:00.279557       1 serving.go:348] Generated self-signed cert in-memory
	I0706 21:28:00.642465       1 controllermanager.go:187] "Starting" version="v1.27.3"
	I0706 21:28:00.642491       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 21:28:00.643914       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0706 21:28:00.644044       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0706 21:28:00.645355       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0706 21:28:00.645424       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [000c4dcb0b47] <==
	* 
	* 
	* ==> kube-proxy [e1248db6576a] <==
	* I0706 21:28:34.710562       1 node.go:141] Successfully retrieved node IP: 172.29.72.136
	I0706 21:28:34.711066       1 server_others.go:110] "Detected node IP" address="172.29.72.136"
	I0706 21:28:34.711196       1 server_others.go:554] "Using iptables proxy"
	I0706 21:28:34.751225       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0706 21:28:34.751316       1 server_others.go:192] "Using iptables Proxier"
	I0706 21:28:34.751384       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0706 21:28:34.751932       1 server.go:658] "Version info" version="v1.27.3"
	I0706 21:28:34.751970       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 21:28:34.752789       1 config.go:188] "Starting service config controller"
	I0706 21:28:34.752836       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0706 21:28:34.752863       1 config.go:97] "Starting endpoint slice config controller"
	I0706 21:28:34.752872       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0706 21:28:34.753511       1 config.go:315] "Starting node config controller"
	I0706 21:28:34.753545       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0706 21:28:34.853715       1 shared_informer.go:318] Caches are synced for node config
	I0706 21:28:34.854056       1 shared_informer.go:318] Caches are synced for service config
	I0706 21:28:34.854153       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [03848ffed74f] <==
	* W0706 21:28:28.086280       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: Get "https://172.29.72.136:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.088199       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://172.29.72.136:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.086395       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://172.29.72.136:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.088566       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://172.29.72.136:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.088770       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://172.29.72.136:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.089052       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://172.29.72.136:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.088951       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.29.72.136:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.089329       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.29.72.136:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.086563       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://172.29.72.136:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.086644       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://172.29.72.136:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.086728       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://172.29.72.136:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.086808       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://172.29.72.136:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.086929       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.29.72.136:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.087006       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://172.29.72.136:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:28.086480       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://172.29.72.136:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.089627       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://172.29.72.136:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.089769       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://172.29.72.136:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.089913       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://172.29.72.136:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.090155       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://172.29.72.136:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.090331       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.29.72.136:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.090482       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://172.29.72.136:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	E0706 21:28:28.090632       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://172.29.72.136:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 172.29.72.136:8443: connect: connection refused
	W0706 21:28:30.779380       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0706 21:28:30.779441       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0706 21:28:30.882341       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [bfbf93f0c784] <==
	* I0706 21:28:01.311335       1 serving.go:348] Generated self-signed cert in-memory
	W0706 21:28:11.835195       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://172.29.72.136:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0706 21:28:11.835303       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0706 21:28:11.835314       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0706 21:28:13.900496       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0706 21:28:13.900543       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0706 21:28:13.902837       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	E0706 21:28:13.902931       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-07-06 21:24:49 UTC, ends at Thu 2023-07-06 21:29:20 UTC. --
	Jul 06 21:28:31 pause-815300 kubelet[6792]: E0706 21:28:31.022112    6792 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"pause-815300\" not found"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: E0706 21:28:31.416822    6792 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-815300\" already exists" pod="kube-system/kube-scheduler-pause-815300"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: E0706 21:28:31.709862    6792 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-815300\" already exists" pod="kube-system/kube-apiserver-pause-815300"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.733781    6792 apiserver.go:52] "Watching apiserver"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.737464    6792 topology_manager.go:212] "Topology Admit Handler"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.737588    6792 topology_manager.go:212] "Topology Admit Handler"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.782027    6792 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.850868    6792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5-lib-modules\") pod \"kube-proxy-q98dz\" (UID: \"75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5\") " pod="kube-system/kube-proxy-q98dz"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.851005    6792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5-kube-proxy\") pod \"kube-proxy-q98dz\" (UID: \"75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5\") " pod="kube-system/kube-proxy-q98dz"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.851040    6792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86d450a5-067f-4e41-b0ab-74b6a870077e-config-volume\") pod \"coredns-5d78c9869d-pxd8p\" (UID: \"86d450a5-067f-4e41-b0ab-74b6a870077e\") " pod="kube-system/coredns-5d78c9869d-pxd8p"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.851065    6792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5-xtables-lock\") pod \"kube-proxy-q98dz\" (UID: \"75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5\") " pod="kube-system/kube-proxy-q98dz"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.851131    6792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4p6z\" (UniqueName: \"kubernetes.io/projected/86d450a5-067f-4e41-b0ab-74b6a870077e-kube-api-access-p4p6z\") pod \"coredns-5d78c9869d-pxd8p\" (UID: \"86d450a5-067f-4e41-b0ab-74b6a870077e\") " pod="kube-system/coredns-5d78c9869d-pxd8p"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.851160    6792 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ns9p\" (UniqueName: \"kubernetes.io/projected/75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5-kube-api-access-5ns9p\") pod \"kube-proxy-q98dz\" (UID: \"75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5\") " pod="kube-system/kube-proxy-q98dz"
	Jul 06 21:28:31 pause-815300 kubelet[6792]: I0706 21:28:31.851173    6792 reconciler.go:41] "Reconciler: start to sync state"
	Jul 06 21:28:32 pause-815300 kubelet[6792]: E0706 21:28:32.722502    6792 kuberuntime_manager.go:1212] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.27.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5ns9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod kube-proxy-q98dz_kube-system(75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Jul 06 21:28:32 pause-815300 kubelet[6792]: E0706 21:28:32.722559    6792 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-q98dz" podUID=75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5
	Jul 06 21:28:32 pause-815300 kubelet[6792]: I0706 21:28:32.733849    6792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="735bf401835f44dba8df946338e2cf4fd6d1df227dea807ba5b79d2225d9143f"
	Jul 06 21:28:32 pause-815300 kubelet[6792]: I0706 21:28:32.734317    6792 scope.go:115] "RemoveContainer" containerID="000c4dcb0b4749ca97de11e5a46b6d6c868faf4537f8cf251efa790d627bcca2"
	Jul 06 21:28:32 pause-815300 kubelet[6792]: E0706 21:28:32.744174    6792 kuberuntime_manager.go:1212] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.27.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5ns9p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod kube-proxy-q98dz_kube-system(75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Jul 06 21:28:32 pause-815300 kubelet[6792]: E0706 21:28:32.744215    6792 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-q98dz" podUID=75b2101c-7a6d-4a1b-b11c-3cd54fea8ff5
	Jul 06 21:28:33 pause-815300 kubelet[6792]: I0706 21:28:33.249797    6792 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8aad1162fe0ff6dafc3ae50ca6cafd51f3faadf8b5426ac576a388308bdb4a49"
	Jul 06 21:28:33 pause-815300 kubelet[6792]: E0706 21:28:33.317833    6792 kuberuntime_manager.go:1212] container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.10.1,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-p4p6z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod coredns-5d78c9869d-pxd8p_kube-system(86d450a5-067f-4e41-b0ab-74b6a870077e): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Jul 06 21:28:33 pause-815300 kubelet[6792]: E0706 21:28:33.317885    6792 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-5d78c9869d-pxd8p" podUID=86d450a5-067f-4e41-b0ab-74b6a870077e
	Jul 06 21:28:34 pause-815300 kubelet[6792]: I0706 21:28:34.278771    6792 scope.go:115] "RemoveContainer" containerID="933f2bbb2a3f2985b7775d13d34d8075e5007f7898cde78115ea68f6d28a753d"
	Jul 06 21:28:34 pause-815300 kubelet[6792]: I0706 21:28:34.308347    6792 scope.go:115] "RemoveContainer" containerID="000c4dcb0b4749ca97de11e5a46b6d6c868faf4537f8cf251efa790d627bcca2"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-815300 -n pause-815300
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-815300 -n pause-815300: (4.9480798s)
helpers_test.go:261: (dbg) Run:  kubectl --context pause-815300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (142.61s)

                                                
                                    

Test pass (267/302)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 11.26
4 TestDownloadOnly/v1.16.0/preload-exists 0.06
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.27.3/json-events 8.76
11 TestDownloadOnly/v1.27.3/preload-exists 0
14 TestDownloadOnly/v1.27.3/kubectl 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.37
16 TestDownloadOnly/DeleteAll 1.13
17 TestDownloadOnly/DeleteAlwaysSucceeds 1.32
19 TestBinaryMirror 3.13
20 TestOffline 257.65
22 TestAddons/Setup 286.36
24 TestAddons/parallel/Registry 26.83
25 TestAddons/parallel/Ingress 37.76
26 TestAddons/parallel/InspektorGadget 13.29
27 TestAddons/parallel/MetricsServer 5.58
28 TestAddons/parallel/HelmTiller 25.61
30 TestAddons/parallel/CSI 67.71
31 TestAddons/parallel/Headlamp 22.06
32 TestAddons/parallel/CloudSpanner 8.2
35 TestAddons/serial/GCPAuth/Namespaces 0.44
36 TestAddons/StoppedEnableDisable 26.63
37 TestCertOptions 211.13
38 TestCertExpiration 464.7
39 TestDockerFlags 224.99
40 TestForceSystemdFlag 155.93
41 TestForceSystemdEnv 208.26
46 TestErrorSpam/setup 99.98
47 TestErrorSpam/start 4.96
48 TestErrorSpam/status 12.93
49 TestErrorSpam/pause 8.39
50 TestErrorSpam/unpause 8.69
51 TestErrorSpam/stop 29.69
54 TestFunctional/serial/CopySyncFile 0.03
55 TestFunctional/serial/StartWithProxy 152.28
56 TestFunctional/serial/AuditLog 0
57 TestFunctional/serial/SoftStart 63.19
58 TestFunctional/serial/KubeContext 0.17
59 TestFunctional/serial/KubectlGetPods 0.27
62 TestFunctional/serial/CacheCmd/cache/add_remote 13.77
63 TestFunctional/serial/CacheCmd/cache/add_local 5.51
64 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.22
65 TestFunctional/serial/CacheCmd/cache/list 0.23
66 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 3.41
67 TestFunctional/serial/CacheCmd/cache/cache_reload 13.94
68 TestFunctional/serial/CacheCmd/cache/delete 0.44
69 TestFunctional/serial/MinikubeKubectlCmd 0.42
70 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.51
71 TestFunctional/serial/ExtraConfig 69.72
72 TestFunctional/serial/ComponentHealth 0.22
73 TestFunctional/serial/LogsCmd 3.76
74 TestFunctional/serial/LogsFileCmd 4.36
75 TestFunctional/serial/InvalidService 10.55
77 TestFunctional/parallel/ConfigCmd 1.5
79 TestFunctional/parallel/DryRun 4.05
80 TestFunctional/parallel/InternationalLanguage 2.56
81 TestFunctional/parallel/StatusCmd 14.5
85 TestFunctional/parallel/ServiceCmdConnect 26.69
86 TestFunctional/parallel/AddonsCmd 0.63
87 TestFunctional/parallel/PersistentVolumeClaim 45.23
89 TestFunctional/parallel/SSHCmd 7.45
90 TestFunctional/parallel/CpCmd 14.52
91 TestFunctional/parallel/MySQL 66.76
92 TestFunctional/parallel/FileSync 3.37
93 TestFunctional/parallel/CertSync 22.46
97 TestFunctional/parallel/NodeLabels 0.21
99 TestFunctional/parallel/NonActiveRuntimeDisabled 4.34
101 TestFunctional/parallel/License 2.56
102 TestFunctional/parallel/ImageCommands/ImageListShort 2.69
103 TestFunctional/parallel/ImageCommands/ImageListTable 2.92
104 TestFunctional/parallel/ImageCommands/ImageListJson 2.78
105 TestFunctional/parallel/ImageCommands/ImageListYaml 2.68
106 TestFunctional/parallel/ImageCommands/ImageBuild 12.12
107 TestFunctional/parallel/ImageCommands/Setup 3.76
108 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 14.24
109 TestFunctional/parallel/Version/short 0.24
110 TestFunctional/parallel/Version/components 3.23
111 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 10.58
112 TestFunctional/parallel/ServiceCmd/DeployApp 9.6
113 TestFunctional/parallel/ProfileCmd/profile_not_create 3.56
114 TestFunctional/parallel/ProfileCmd/profile_list 3.42
115 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 16.47
116 TestFunctional/parallel/ProfileCmd/profile_json_output 3.65
117 TestFunctional/parallel/ServiceCmd/List 5.24
118 TestFunctional/parallel/ServiceCmd/JSONOutput 4.63
119 TestFunctional/parallel/ServiceCmd/HTTPS 6.39
120 TestFunctional/parallel/ImageCommands/ImageSaveToFile 5.53
121 TestFunctional/parallel/ServiceCmd/Format 6.96
122 TestFunctional/parallel/ImageCommands/ImageRemove 5.55
123 TestFunctional/parallel/ServiceCmd/URL 6.33
124 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 8.23
125 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 9.73
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 3.25
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.56
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
137 TestFunctional/parallel/DockerEnv/powershell 14.4
138 TestFunctional/parallel/UpdateContextCmd/no_changes 0.98
139 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.94
140 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.95
141 TestFunctional/delete_addon-resizer_images 0.64
142 TestFunctional/delete_my-image_image 0.17
143 TestFunctional/delete_minikube_cached_images 0.18
147 TestImageBuild/serial/Setup 105.95
148 TestImageBuild/serial/NormalBuild 4.45
149 TestImageBuild/serial/BuildWithBuildArg 4.55
150 TestImageBuild/serial/BuildWithDockerIgnore 3.16
151 TestImageBuild/serial/BuildWithSpecifiedDockerfile 2.89
154 TestIngressAddonLegacy/StartLegacyK8sCluster 127.16
156 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 25.84
157 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 2.83
158 TestIngressAddonLegacy/serial/ValidateIngressAddons 47.22
161 TestJSONOutput/start/Command 151.48
162 TestJSONOutput/start/Audit 0
164 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/pause/Command 3.21
168 TestJSONOutput/pause/Audit 0
170 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/unpause/Command 3.2
174 TestJSONOutput/unpause/Audit 0
176 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/stop/Command 22.74
180 TestJSONOutput/stop/Audit 0
182 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
184 TestErrorJSONOutput 1.33
189 TestMainNoArgs 0.19
190 TestMinikubeProfile 285.84
193 TestMountStart/serial/StartWithMountFirst 66.94
194 TestMountStart/serial/VerifyMountFirst 3.21
195 TestMountStart/serial/StartWithMountSecond 66.77
196 TestMountStart/serial/VerifyMountSecond 3.25
197 TestMountStart/serial/DeleteFirst 11.73
198 TestMountStart/serial/VerifyMountPostDelete 3.08
199 TestMountStart/serial/Stop 9.99
200 TestMountStart/serial/RestartStopped 52.27
201 TestMountStart/serial/VerifyMountPostStop 3.18
204 TestMultiNode/serial/FreshStart2Nodes 232.61
205 TestMultiNode/serial/DeployApp2Nodes 9.14
207 TestMultiNode/serial/AddNode 112.14
208 TestMultiNode/serial/ProfileList 2.89
209 TestMultiNode/serial/CopyFile 127.19
210 TestMultiNode/serial/StopNode 29.38
211 TestMultiNode/serial/StartAfterStop 87.29
213 TestMultiNode/serial/DeleteNode 26.45
214 TestMultiNode/serial/StopMultiNode 45.52
215 TestMultiNode/serial/RestartMultiNode 178.85
216 TestMultiNode/serial/ValidateNameConflict 139.03
220 TestPreload 285.66
221 TestScheduledStopWindows 197.27
228 TestKubernetesUpgrade 596.11
231 TestNoKubernetes/serial/StartNoK8sWithVersion 0.32
232 TestNoKubernetes/serial/StartWithK8s 278.47
242 TestPause/serial/Start 263.45
254 TestStoppedBinaryUpgrade/Setup 0.86
258 TestStartStop/group/old-k8s-version/serial/FirstStart 289.3
259 TestStoppedBinaryUpgrade/MinikubeLogs 6.84
261 TestStartStop/group/no-preload/serial/FirstStart 250.55
263 TestStartStop/group/embed-certs/serial/FirstStart 303.13
265 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 235.21
266 TestStartStop/group/old-k8s-version/serial/DeployApp 9.85
267 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 3.79
268 TestStartStop/group/old-k8s-version/serial/Stop 24.49
269 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 5.28
270 TestStartStop/group/old-k8s-version/serial/SecondStart 516.81
271 TestStartStop/group/no-preload/serial/DeployApp 10.83
272 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 4.17
273 TestStartStop/group/no-preload/serial/Stop 24.4
274 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 2.53
275 TestStartStop/group/no-preload/serial/SecondStart 397.12
276 TestStartStop/group/embed-certs/serial/DeployApp 9.78
277 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 4.19
278 TestStartStop/group/embed-certs/serial/Stop 29.93
279 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 2.84
280 TestStartStop/group/embed-certs/serial/SecondStart 375.92
281 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.79
282 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 4.02
283 TestStartStop/group/default-k8s-diff-port/serial/Stop 25.71
284 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 2.93
285 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 636.47
286 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
287 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.42
288 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 3.46
289 TestStartStop/group/no-preload/serial/Pause 24.52
290 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
291 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
292 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.41
293 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.73
294 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 3.75
295 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 3.54
296 TestStartStop/group/old-k8s-version/serial/Pause 26.35
297 TestStartStop/group/embed-certs/serial/Pause 26.35
299 TestStartStop/group/newest-cni/serial/FirstStart 135.31
300 TestNetworkPlugins/group/auto/Start 142.55
301 TestNetworkPlugins/group/calico/Start 264.29
302 TestStartStop/group/newest-cni/serial/DeployApp 0
303 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.73
304 TestStartStop/group/newest-cni/serial/Stop 32.56
305 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 3
306 TestStartStop/group/newest-cni/serial/SecondStart 147.29
307 TestNetworkPlugins/group/auto/KubeletFlags 3.51
308 TestNetworkPlugins/group/auto/NetCatPod 16.66
309 TestNetworkPlugins/group/auto/DNS 0.38
310 TestNetworkPlugins/group/auto/Localhost 0.36
311 TestNetworkPlugins/group/auto/HairPin 0.45
312 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 17.97
313 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.58
314 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
315 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
316 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 4.53
317 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 4.62
318 TestStartStop/group/newest-cni/serial/Pause 29.91
319 TestStartStop/group/default-k8s-diff-port/serial/Pause 29.41
320 TestNetworkPlugins/group/calico/ControllerPod 5.05
321 TestNetworkPlugins/group/calico/KubeletFlags 4.57
322 TestNetworkPlugins/group/calico/NetCatPod 16.81
323 TestNetworkPlugins/group/calico/DNS 0.49
324 TestNetworkPlugins/group/calico/Localhost 0.51
325 TestNetworkPlugins/group/calico/HairPin 0.38
326 TestNetworkPlugins/group/custom-flannel/Start 168.58
327 TestNetworkPlugins/group/false/Start 228.43
328 TestNetworkPlugins/group/kindnet/Start 281.43
329 TestNetworkPlugins/group/flannel/Start 237.98
330 TestNetworkPlugins/group/custom-flannel/KubeletFlags 4.14
331 TestNetworkPlugins/group/custom-flannel/NetCatPod 26.52
332 TestNetworkPlugins/group/custom-flannel/DNS 0.45
333 TestNetworkPlugins/group/custom-flannel/Localhost 0.36
334 TestNetworkPlugins/group/custom-flannel/HairPin 0.37
335 TestNetworkPlugins/group/false/KubeletFlags 3.91
336 TestNetworkPlugins/group/false/NetCatPod 28.7
337 TestNetworkPlugins/group/false/DNS 0.43
338 TestNetworkPlugins/group/false/Localhost 0.35
339 TestNetworkPlugins/group/false/HairPin 0.39
340 TestNetworkPlugins/group/kindnet/ControllerPod 5.04
341 TestNetworkPlugins/group/kindnet/KubeletFlags 3.93
342 TestNetworkPlugins/group/kindnet/NetCatPod 26.64
343 TestNetworkPlugins/group/kindnet/DNS 0.45
344 TestNetworkPlugins/group/kindnet/Localhost 0.38
345 TestNetworkPlugins/group/kindnet/HairPin 0.41
346 TestNetworkPlugins/group/enable-default-cni/Start 181.65
347 TestNetworkPlugins/group/flannel/ControllerPod 5.05
348 TestNetworkPlugins/group/flannel/KubeletFlags 4.18
349 TestNetworkPlugins/group/flannel/NetCatPod 16.62
350 TestNetworkPlugins/group/flannel/DNS 0.4
351 TestNetworkPlugins/group/flannel/Localhost 0.41
352 TestNetworkPlugins/group/flannel/HairPin 0.39
353 TestNetworkPlugins/group/bridge/Start 174.58
354 TestNetworkPlugins/group/kubenet/Start 153.41
355 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 3.68
356 TestNetworkPlugins/group/enable-default-cni/NetCatPod 16.66
357 TestNetworkPlugins/group/enable-default-cni/DNS 0.38
358 TestNetworkPlugins/group/enable-default-cni/Localhost 0.35
359 TestNetworkPlugins/group/enable-default-cni/HairPin 0.36
360 TestNetworkPlugins/group/bridge/KubeletFlags 3.67
361 TestNetworkPlugins/group/bridge/NetCatPod 15.63
362 TestNetworkPlugins/group/bridge/DNS 0.69
363 TestNetworkPlugins/group/bridge/Localhost 0.36
364 TestNetworkPlugins/group/bridge/HairPin 0.38
365 TestNetworkPlugins/group/kubenet/KubeletFlags 3.72
366 TestNetworkPlugins/group/kubenet/NetCatPod 15.59
367 TestNetworkPlugins/group/kubenet/DNS 0.44
368 TestNetworkPlugins/group/kubenet/Localhost 0.37
369 TestNetworkPlugins/group/kubenet/HairPin 0.39
TestDownloadOnly/v1.16.0/json-events (11.26s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-111800 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-111800 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=hyperv: (11.2571975s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (11.26s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.06s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-111800
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-111800: exit status 85 (300.4099ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-111800 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:03 UTC |          |
	|         | -p download-only-111800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/06 20:03:17
	Running on machine: minikube6
	Binary: Built with gc go1.20.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0706 20:03:17.665674    9576 out.go:296] Setting OutFile to fd 640 ...
	I0706 20:03:17.731511    9576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 20:03:17.731511    9576 out.go:309] Setting ErrFile to fd 644...
	I0706 20:03:17.731594    9576 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0706 20:03:17.743495    9576 root.go:312] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0706 20:03:17.753429    9576 out.go:303] Setting JSON to true
	I0706 20:03:17.757173    9576 start.go:127] hostinfo: {"hostname":"minikube6","uptime":491934,"bootTime":1688181863,"procs":143,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3155 Build 19045.3155","kernelVersion":"10.0.19045.3155 Build 19045.3155","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0706 20:03:17.757307    9576 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 20:03:17.773397    9576 out.go:97] [download-only-111800] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	I0706 20:03:17.774589    9576 notify.go:220] Checking for updates...
	W0706 20:03:17.774589    9576 preload.go:295] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0706 20:03:17.777918    9576 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:03:17.783171    9576 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0706 20:03:17.789087    9576 out.go:169] MINIKUBE_LOCATION=16832
	I0706 20:03:17.794575    9576 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0706 20:03:17.800332    9576 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0706 20:03:17.801314    9576 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 20:03:19.854468    9576 out.go:97] Using the hyperv driver based on user configuration
	I0706 20:03:19.854468    9576 start.go:297] selected driver: hyperv
	I0706 20:03:19.854468    9576 start.go:944] validating driver "hyperv" against <nil>
	I0706 20:03:19.854468    9576 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0706 20:03:19.903608    9576 start_flags.go:382] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0706 20:03:19.904378    9576 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0706 20:03:19.904378    9576 cni.go:84] Creating CNI manager for ""
	I0706 20:03:19.904551    9576 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0706 20:03:19.904551    9576 start_flags.go:319] config:
	{Name:download-only-111800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-111800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 20:03:19.905535    9576 iso.go:125] acquiring lock: {Name:mk7608081596fbafb2a8a8984d26768bdf174467 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 20:03:19.908441    9576 out.go:97] Downloading VM boot image ...
	I0706 20:03:19.908745    9576 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.30.1-1688144767-16765-amd64.iso
	I0706 20:03:22.910590    9576 out.go:97] Starting control plane node download-only-111800 in cluster download-only-111800
	I0706 20:03:22.910790    9576 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0706 20:03:22.963452    9576 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0706 20:03:22.964328    9576 cache.go:57] Caching tarball of preloaded images
	I0706 20:03:22.964614    9576 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0706 20:03:22.968346    9576 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0706 20:03:22.968346    9576 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0706 20:03:23.031186    9576 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0706 20:03:26.363608    9576 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0706 20:03:26.365487    9576 preload.go:256] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0706 20:03:27.300693    9576 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0706 20:03:27.301231    9576 profile.go:148] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-111800\config.json ...
	I0706 20:03:27.301866    9576 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-111800\config.json: {Name:mk8eb85e5ca9b4286b55256511fc30f214c25515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0706 20:03:27.303474    9576 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0706 20:03:27.304842    9576 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/windows/amd64/kubectl.exe.sha1 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.16.0/kubectl.exe
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-111800"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)

TestDownloadOnly/v1.27.3/json-events (8.76s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-111800 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-111800 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=hyperv: (8.7555208s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (8.76s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
--- PASS: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.37s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-111800
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-111800: exit status 85 (371.1179ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-111800 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:03 UTC |          |
	|         | -p download-only-111800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	| start   | -o=json --download-only        | download-only-111800 | minikube6\jenkins | v1.30.1 | 06 Jul 23 20:03 UTC |          |
	|         | -p download-only-111800        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/06 20:03:29
	Running on machine: minikube6
	Binary: Built with gc go1.20.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0706 20:03:29.294052    9272 out.go:296] Setting OutFile to fd 716 ...
	I0706 20:03:29.351488    9272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 20:03:29.351488    9272 out.go:309] Setting ErrFile to fd 720...
	I0706 20:03:29.351488    9272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0706 20:03:29.362247    9272 root.go:312] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the file specified.
	I0706 20:03:29.370718    9272 out.go:303] Setting JSON to true
	I0706 20:03:29.373409    9272 start.go:127] hostinfo: {"hostname":"minikube6","uptime":491946,"bootTime":1688181863,"procs":144,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3155 Build 19045.3155","kernelVersion":"10.0.19045.3155 Build 19045.3155","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0706 20:03:29.373409    9272 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 20:03:29.377302    9272 out.go:97] [download-only-111800] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	I0706 20:03:29.378328    9272 notify.go:220] Checking for updates...
	I0706 20:03:29.582251    9272 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:03:29.752155    9272 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0706 20:03:29.755027    9272 out.go:169] MINIKUBE_LOCATION=16832
	I0706 20:03:29.757887    9272 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0706 20:03:29.763662    9272 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0706 20:03:29.765143    9272 config.go:182] Loaded profile config "download-only-111800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0706 20:03:29.765143    9272 start.go:852] api.Load failed for download-only-111800: filestore "download-only-111800": Docker machine "download-only-111800" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0706 20:03:29.765143    9272 driver.go:373] Setting default libvirt URI to qemu:///system
	W0706 20:03:29.765821    9272 start.go:852] api.Load failed for download-only-111800: filestore "download-only-111800": Docker machine "download-only-111800" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0706 20:03:31.451721    9272 out.go:97] Using the hyperv driver based on existing profile
	I0706 20:03:31.451993    9272 start.go:297] selected driver: hyperv
	I0706 20:03:31.451993    9272 start.go:944] validating driver "hyperv" against &{Name:download-only-111800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-111800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 20:03:31.497851    9272 cni.go:84] Creating CNI manager for ""
	I0706 20:03:31.497851    9272 cni.go:152] "hyperv" driver + "docker" runtime found, recommending bridge
	I0706 20:03:31.497851    9272 start_flags.go:319] config:
	{Name:download-only-111800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-111800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 20:03:31.498599    9272 iso.go:125] acquiring lock: {Name:mk7608081596fbafb2a8a8984d26768bdf174467 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0706 20:03:31.500956    9272 out.go:97] Starting control plane node download-only-111800 in cluster download-only-111800
	I0706 20:03:31.502062    9272 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 20:03:31.549868    9272 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0706 20:03:31.549868    9272 cache.go:57] Caching tarball of preloaded images
	I0706 20:03:31.550567    9272 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0706 20:03:31.554636    9272 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0706 20:03:31.554797    9272 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 ...
	I0706 20:03:31.613706    9272 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4?checksum=md5:90b30902fa911e3bcfdde5b24cedf0b2 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-111800"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.37s)

TestDownloadOnly/DeleteAll (1.13s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:187: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.1347075s)
--- PASS: TestDownloadOnly/DeleteAll (1.13s)

TestDownloadOnly/DeleteAlwaysSucceeds (1.32s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-111800
aaa_download_only_test.go:199: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-111800: (1.3153781s)
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (1.32s)

TestBinaryMirror (3.13s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-377300 --alsologtostderr --binary-mirror http://127.0.0.1:50103 --driver=hyperv
aaa_download_only_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-377300 --alsologtostderr --binary-mirror http://127.0.0.1:50103 --driver=hyperv: (2.3148637s)
helpers_test.go:175: Cleaning up "binary-mirror-377300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-377300
--- PASS: TestBinaryMirror (3.13s)

TestOffline (257.65s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-910700 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-910700 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m31.3359237s)
helpers_test.go:175: Cleaning up "offline-docker-910700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-910700
E0706 21:19:55.142922    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 21:19:56.201612    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-910700: (46.3093566s)
--- PASS: TestOffline (257.65s)

TestAddons/Setup (286.36s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-326800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-326800 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m46.3600098s)
--- PASS: TestAddons/Setup (286.36s)

TestAddons/parallel/Registry (26.83s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 27.7465ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-ftmnf" [523937e0-bd57-480a-ad76-a57557d088ab] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0214235s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5clph" [8e219f8c-83d7-46c5-abf3-2190f67ec934] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0348292s
addons_test.go:316: (dbg) Run:  kubectl --context addons-326800 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-326800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-326800 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (12.042658s)
addons_test.go:335: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-326800 ip
2023/07/06 20:08:55 [DEBUG] GET http://172.29.71.204:5000
addons_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-326800 addons disable registry --alsologtostderr -v=1
addons_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p addons-326800 addons disable registry --alsologtostderr -v=1: (3.4982171s)
--- PASS: TestAddons/parallel/Registry (26.83s)

TestAddons/parallel/Ingress (37.76s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-326800 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-326800 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-326800 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [78aa9995-f866-43d8-ade0-2d93327f7ea3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [78aa9995-f866-43d8-ade0-2d93327f7ea3] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.0136122s
addons_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-326800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Done: out/minikube-windows-amd64.exe -p addons-326800 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (3.3673436s)
addons_test.go:262: (dbg) Run:  kubectl --context addons-326800 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-326800 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 172.29.71.204
addons_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-326800 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p addons-326800 addons disable ingress-dns --alsologtostderr -v=1: (4.253944s)
addons_test.go:287: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-326800 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-windows-amd64.exe -p addons-326800 addons disable ingress --alsologtostderr -v=1: (10.2498496s)
--- PASS: TestAddons/parallel/Ingress (37.76s)

TestAddons/parallel/InspektorGadget (13.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-b97j4" [ccc38719-c0a3-497e-83dc-1e1ecd72d019] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0185201s
addons_test.go:817: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-326800
addons_test.go:817: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-326800: (8.2684317s)
--- PASS: TestAddons/parallel/InspektorGadget (13.29s)

TestAddons/parallel/MetricsServer (5.58s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 9.3103ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-6w5jz" [4e5af558-ffa9-4b87-b1aa-0b365f4a9ee2] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0152855s
addons_test.go:391: (dbg) Run:  kubectl --context addons-326800 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-326800 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.58s)

TestAddons/parallel/HelmTiller (25.61s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 27.0747ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-9fr22" [042a5d50-c3a2-4485-9c9c-9177e15ce889] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0214878s
addons_test.go:449: (dbg) Run:  kubectl --context addons-326800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-326800 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (17.2593448s)
addons_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-326800 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:466: (dbg) Done: out/minikube-windows-amd64.exe -p addons-326800 addons disable helm-tiller --alsologtostderr -v=1: (3.2817256s)
--- PASS: TestAddons/parallel/HelmTiller (25.61s)

TestAddons/parallel/CSI (67.71s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 28.4402ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-326800 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:540: (dbg) Done: kubectl --context addons-326800 create -f testdata\csi-hostpath-driver\pvc.yaml: (1.1619536s)
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-326800 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6d758d5f-6781-461d-81df-753be698ace9] Pending
helpers_test.go:344: "task-pv-pod" [6d758d5f-6781-461d-81df-753be698ace9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6d758d5f-6781-461d-81df-753be698ace9] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 23.0255563s
addons_test.go:560: (dbg) Run:  kubectl --context addons-326800 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-326800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-326800 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-326800 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-326800 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-326800 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-326800 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-326800 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [fdd64482-1621-4303-b406-3867dfd4206f] Pending
helpers_test.go:344: "task-pv-pod-restore" [fdd64482-1621-4303-b406-3867dfd4206f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [fdd64482-1621-4303-b406-3867dfd4206f] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0149021s
addons_test.go:602: (dbg) Run:  kubectl --context addons-326800 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-326800 delete pod task-pv-pod-restore: (1.0540571s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-326800 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-326800 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-326800 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-windows-amd64.exe -p addons-326800 addons disable csi-hostpath-driver --alsologtostderr -v=1: (9.4053786s)
addons_test.go:618: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-326800 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:618: (dbg) Done: out/minikube-windows-amd64.exe -p addons-326800 addons disable volumesnapshots --alsologtostderr -v=1: (3.1474364s)
--- PASS: TestAddons/parallel/CSI (67.71s)

TestAddons/parallel/Headlamp (22.06s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-326800 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-326800 --alsologtostderr -v=1: (3.7486364s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-969jp" [816ed90b-5595-4df7-91ef-bbf48332485c] Pending
helpers_test.go:344: "headlamp-66f6498c69-969jp" [816ed90b-5595-4df7-91ef-bbf48332485c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-969jp" [816ed90b-5595-4df7-91ef-bbf48332485c] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 18.3041692s
--- PASS: TestAddons/parallel/Headlamp (22.06s)

TestAddons/parallel/CloudSpanner (8.2s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-fb67554b8-tjwlg" [8a4a1b1e-ad8e-4eff-a687-2cd4927a94fd] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0921136s
addons_test.go:836: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-326800
addons_test.go:836: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-326800: (3.1004384s)
--- PASS: TestAddons/parallel/CloudSpanner (8.20s)

TestAddons/serial/GCPAuth/Namespaces (0.44s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-326800 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-326800 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.44s)

TestAddons/StoppedEnableDisable (26.63s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-326800
addons_test.go:148: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-326800: (23.4772099s)
addons_test.go:152: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-326800
addons_test.go:152: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-326800: (1.7613182s)
addons_test.go:156: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-326800
addons_test.go:161: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-326800
--- PASS: TestAddons/StoppedEnableDisable (26.63s)

TestCertOptions (211.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-864500 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-864500 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (2m39.6677592s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-864500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
E0706 21:24:56.214313    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-864500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (3.4455602s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-864500 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-864500 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-864500 -- "sudo cat /etc/kubernetes/admin.conf": (3.6445413s)
helpers_test.go:175: Cleaning up "cert-options-864500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-864500
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-864500: (44.1769559s)
--- PASS: TestCertOptions (211.13s)

TestCertExpiration (464.7s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-861000 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-861000 --memory=2048 --cert-expiration=3m --driver=hyperv: (2m39.7799582s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-861000 --memory=2048 --cert-expiration=8760h --driver=hyperv
E0706 21:26:31.250827    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-861000 --memory=2048 --cert-expiration=8760h --driver=hyperv: (1m17.2944789s)
helpers_test.go:175: Cleaning up "cert-expiration-861000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-861000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-861000: (47.6227881s)
--- PASS: TestCertExpiration (464.70s)

TestDockerFlags (224.99s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-630100 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
E0706 21:19:39.423992    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
docker_test.go:45: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-630100 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (3m2.3516653s)
docker_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-630100 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-630100 ssh "sudo systemctl show docker --property=Environment --no-pager": (3.7409843s)
docker_test.go:61: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-630100 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-630100 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (3.8869204s)
helpers_test.go:175: Cleaning up "docker-flags-630100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-630100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-630100: (35.0106686s)
--- PASS: TestDockerFlags (224.99s)

TestForceSystemdFlag (155.93s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-504800 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-504800 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (1m52.8666509s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-504800 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-504800 ssh "docker info --format {{.CgroupDriver}}": (3.7436488s)
helpers_test.go:175: Cleaning up "force-systemd-flag-504800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-504800
E0706 21:18:31.915119    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-504800: (39.3211077s)
--- PASS: TestForceSystemdFlag (155.93s)

TestForceSystemdEnv (208.26s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-807400 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
E0706 21:28:31.918177    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
docker_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-807400 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (2m55.9125959s)
docker_test.go:104: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-807400 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-807400 ssh "docker info --format {{.CgroupDriver}}": (3.7121679s)
helpers_test.go:175: Cleaning up "force-systemd-env-807400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-807400
E0706 21:31:31.251752    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-807400: (28.6378308s)
--- PASS: TestForceSystemdEnv (208.26s)

TestErrorSpam/setup (99.98s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-154600 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-154600 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 --driver=hyperv: (1m39.9749474s)
error_spam_test.go:91: acceptable stderr: "! C:\\ProgramData\\chocolatey\\bin\\kubectl.exe is version 1.18.2, which may have incompatibilities with Kubernetes 1.27.3."
--- PASS: TestErrorSpam/setup (99.98s)

TestErrorSpam/start (4.96s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 start --dry-run: (1.6431381s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 start --dry-run: (1.6600571s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 start --dry-run: (1.6562907s)
--- PASS: TestErrorSpam/start (4.96s)

TestErrorSpam/status (12.93s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 status: (4.4284018s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 status: (4.3348347s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 status: (4.1619719s)
--- PASS: TestErrorSpam/status (12.93s)

TestErrorSpam/pause (8.39s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 pause: (3.0014721s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 pause: (2.7051769s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 pause: (2.6813468s)
--- PASS: TestErrorSpam/pause (8.39s)

TestErrorSpam/unpause (8.69s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 unpause: (2.9552608s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 unpause
E0706 20:13:31.892316    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:13:31.910040    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:13:31.930259    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:13:31.963215    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:13:32.021101    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:13:32.112104    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:13:32.283267    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:13:32.605283    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:13:33.253930    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 unpause: (2.8975553s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 unpause
E0706 20:13:34.536459    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:13:37.110411    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 unpause: (2.8337485s)
--- PASS: TestErrorSpam/unpause (8.69s)

TestErrorSpam/stop (29.69s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 stop
E0706 20:13:42.246034    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:13:52.494621    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 stop: (16.9948147s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 stop: (7.096765s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-154600 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-154600 stop: (5.594209s)
--- PASS: TestErrorSpam/stop (29.69s)

TestFunctional/serial/CopySyncFile (0.03s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\8256\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (152.28s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-121800 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0706 20:14:12.987065    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:14:53.959310    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:16:15.884139    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-121800 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (2m32.2781577s)
--- PASS: TestFunctional/serial/StartWithProxy (152.28s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (63.19s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-121800 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-121800 --alsologtostderr -v=8: (1m3.190221s)
functional_test.go:659: soft start took 1m3.1934856s for "functional-121800" cluster.
--- PASS: TestFunctional/serial/SoftStart (63.19s)

TestFunctional/serial/KubeContext (0.17s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.17s)

TestFunctional/serial/KubectlGetPods (0.27s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-121800 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.27s)

TestFunctional/serial/CacheCmd/cache/add_remote (13.77s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 cache add registry.k8s.io/pause:3.1: (5.2588243s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 cache add registry.k8s.io/pause:3.3: (4.2385272s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 cache add registry.k8s.io/pause:latest: (4.2717767s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (13.77s)

TestFunctional/serial/CacheCmd/cache/add_local (5.51s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-121800 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2867248567\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-121800 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2867248567\001: (1.5861226s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 cache add minikube-local-cache-test:functional-121800
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 cache add minikube-local-cache-test:functional-121800: (3.4783128s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 cache delete minikube-local-cache-test:functional-121800
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-121800
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (5.51s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.22s)

TestFunctional/serial/CacheCmd/cache/list (0.23s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.23s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.41s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 ssh sudo crictl images: (3.4103808s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (3.41s)

TestFunctional/serial/CacheCmd/cache/cache_reload (13.94s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 ssh sudo docker rmi registry.k8s.io/pause:latest: (3.435656s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-121800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (3.383544s)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 cache reload: (3.7328401s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (3.3825505s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (13.94s)

TestFunctional/serial/CacheCmd/cache/delete (0.44s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.44s)

TestFunctional/serial/MinikubeKubectlCmd (0.42s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 kubectl -- --context functional-121800 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.42s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.51s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out\kubectl.exe --context functional-121800 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.51s)

TestFunctional/serial/ExtraConfig (69.72s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-121800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0706 20:18:31.887043    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:18:59.731752    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-121800 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m9.719467s)
functional_test.go:757: restart took 1m9.7209942s for "functional-121800" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (69.72s)

TestFunctional/serial/ComponentHealth (0.22s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-121800 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.22s)

TestFunctional/serial/LogsCmd (3.76s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 logs: (3.7552875s)
--- PASS: TestFunctional/serial/LogsCmd (3.76s)

TestFunctional/serial/LogsFileCmd (4.36s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd324281179\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd324281179\001\logs.txt: (4.3557776s)
--- PASS: TestFunctional/serial/LogsFileCmd (4.36s)

TestFunctional/serial/InvalidService (10.55s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-121800 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-121800
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-121800: exit status 115 (6.0112966s)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.29.71.209:30535 |
	|-----------|-------------|-------------|----------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-121800 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-121800 delete -f testdata\invalidsvc.yaml: (1.1397725s)
--- PASS: TestFunctional/serial/InvalidService (10.55s)

TestFunctional/parallel/ConfigCmd (1.5s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-121800 config get cpus: exit status 14 (269.1179ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-121800 config get cpus: exit status 14 (219.5452ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.50s)

TestFunctional/parallel/DryRun (4.05s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-121800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-121800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 23 (1.8857788s)
-- stdout --
	* [functional-121800] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
-- /stdout --
** stderr ** 
	I0706 20:20:22.394208   10688 out.go:296] Setting OutFile to fd 996 ...
	I0706 20:20:22.460941   10688 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 20:20:22.460941   10688 out.go:309] Setting ErrFile to fd 832...
	I0706 20:20:22.460941   10688 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 20:20:22.475520   10688 out.go:303] Setting JSON to false
	I0706 20:20:22.479753   10688 start.go:127] hostinfo: {"hostname":"minikube6","uptime":492959,"bootTime":1688181863,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3155 Build 19045.3155","kernelVersion":"10.0.19045.3155 Build 19045.3155","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0706 20:20:22.479753   10688 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 20:20:22.496264   10688 out.go:177] * [functional-121800] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	I0706 20:20:22.500638   10688 notify.go:220] Checking for updates...
	I0706 20:20:22.503123   10688 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:20:22.507312   10688 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 20:20:22.511482   10688 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0706 20:20:22.514548   10688 out.go:177]   - MINIKUBE_LOCATION=16832
	I0706 20:20:22.514794   10688 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 20:20:22.517454   10688 config.go:182] Loaded profile config "functional-121800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:20:22.520226   10688 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 20:20:24.060619   10688 out.go:177] * Using the hyperv driver based on existing profile
	I0706 20:20:24.065007   10688 start.go:297] selected driver: hyperv
	I0706 20:20:24.065007   10688 start.go:944] validating driver "hyperv" against &{Name:functional-121800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-121800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.29.71.209 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 20:20:24.065675   10688 start.go:955] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 20:20:24.116020   10688 out.go:177] 
	W0706 20:20:24.117790   10688 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0706 20:20:24.120647   10688 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-121800 --dry-run --alsologtostderr -v=1 --driver=hyperv
functional_test.go:987: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-121800 --dry-run --alsologtostderr -v=1 --driver=hyperv: (2.1594315s)
--- PASS: TestFunctional/parallel/DryRun (4.05s)

TestFunctional/parallel/InternationalLanguage (2.56s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-121800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-121800 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 23 (2.5610416s)
-- stdout --
	* [functional-121800] minikube v1.30.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote hyperv basé sur le profil existant
-- /stdout --
** stderr ** 
	I0706 20:20:26.419356    2776 out.go:296] Setting OutFile to fd 352 ...
	I0706 20:20:26.480083    2776 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 20:20:26.480083    2776 out.go:309] Setting ErrFile to fd 932...
	I0706 20:20:26.480083    2776 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 20:20:26.499459    2776 out.go:303] Setting JSON to false
	I0706 20:20:26.501089    2776 start.go:127] hostinfo: {"hostname":"minikube6","uptime":492963,"bootTime":1688181863,"procs":149,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.3155 Build 19045.3155","kernelVersion":"10.0.19045.3155 Build 19045.3155","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0706 20:20:26.501089    2776 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0706 20:20:26.509829    2776 out.go:177] * [functional-121800] minikube v1.30.1 sur Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	I0706 20:20:26.517089    2776 notify.go:220] Checking for updates...
	I0706 20:20:26.519428    2776 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0706 20:20:26.524275    2776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0706 20:20:26.526870    2776 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0706 20:20:26.529621    2776 out.go:177]   - MINIKUBE_LOCATION=16832
	I0706 20:20:26.532051    2776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0706 20:20:26.534908    2776 config.go:182] Loaded profile config "functional-121800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:20:26.535987    2776 driver.go:373] Setting default libvirt URI to qemu:///system
	I0706 20:20:28.742163    2776 out.go:177] * Utilisation du pilote hyperv basé sur le profil existant
	I0706 20:20:28.760135    2776 start.go:297] selected driver: hyperv
	I0706 20:20:28.760195    2776 start.go:944] validating driver "hyperv" against &{Name:functional-121800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16765/minikube-v1.30.1-1688144767-16765-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1687538068-16731@sha256:d08658afefe15fb29b5fcdace4d88182b61941d4fc6089c962f9de20073de953 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-121800 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:172.29.71.209 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0706 20:20:28.760195    2776 start.go:955] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0706 20:20:28.828917    2776 out.go:177] 
	W0706 20:20:28.833341    2776 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0706 20:20:28.836648    2776 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (2.56s)
TestFunctional/parallel/StatusCmd (14.5s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 status: (4.7986983s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (4.513288s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 status -o json: (5.1855791s)
--- PASS: TestFunctional/parallel/StatusCmd (14.50s)
TestFunctional/parallel/ServiceCmdConnect (26.69s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-121800 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-121800 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-dwpqk" [ceb74604-66b4-4057-9cae-55fc70e90bed] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-dwpqk" [ceb74604-66b4-4057-9cae-55fc70e90bed] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 19.0435197s
functional_test.go:1648: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 service hello-node-connect --url
functional_test.go:1648: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 service hello-node-connect --url: (7.0561837s)
functional_test.go:1654: found endpoint for hello-node-connect: http://172.29.71.209:32593
functional_test.go:1674: http://172.29.71.209:32593: success! body:
Hostname: hello-node-connect-6fb669fc84-dwpqk
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.29.71.209:8080/
Request Headers:
	accept-encoding=gzip
	host=172.29.71.209:32593
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (26.69s)
TestFunctional/parallel/AddonsCmd (0.63s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.63s)
TestFunctional/parallel/PersistentVolumeClaim (45.23s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d6cbce26-62f1-4f1d-859a-6de0188b9077] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0205323s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-121800 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-121800 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-121800 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-121800 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Done: kubectl --context functional-121800 apply -f testdata/storage-provisioner/pod.yaml: (1.9277006s)
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1e776644-b881-46b9-a486-44ab487667d3] Pending
helpers_test.go:344: "sp-pod" [1e776644-b881-46b9-a486-44ab487667d3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1e776644-b881-46b9-a486-44ab487667d3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.2295718s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-121800 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-121800 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-121800 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a0a41151-c567-42a5-86da-1b6f57a996fe] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a0a41151-c567-42a5-86da-1b6f57a996fe] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0132622s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-121800 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (45.23s)
TestFunctional/parallel/SSHCmd (7.45s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh "echo hello"
functional_test.go:1724: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 ssh "echo hello": (3.9765331s)
functional_test.go:1741: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh "cat /etc/hostname"
functional_test.go:1741: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 ssh "cat /etc/hostname": (3.4779007s)
--- PASS: TestFunctional/parallel/SSHCmd (7.45s)
TestFunctional/parallel/CpCmd (14.52s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 cp testdata\cp-test.txt /home/docker/cp-test.txt: (3.669719s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh -n functional-121800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 ssh -n functional-121800 "sudo cat /home/docker/cp-test.txt": (3.8056058s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 cp functional-121800:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd1501926530\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 cp functional-121800:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd1501926530\001\cp-test.txt: (3.4690742s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh -n functional-121800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 ssh -n functional-121800 "sudo cat /home/docker/cp-test.txt": (3.5704843s)
--- PASS: TestFunctional/parallel/CpCmd (14.52s)
TestFunctional/parallel/MySQL (66.76s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-121800 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-4krc7" [dea1ef03-ae94-4d00-8044-4a55866ee473] Pending
helpers_test.go:344: "mysql-7db894d786-4krc7" [dea1ef03-ae94-4d00-8044-4a55866ee473] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-4krc7" [dea1ef03-ae94-4d00-8044-4a55866ee473] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 48.0335546s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-121800 exec mysql-7db894d786-4krc7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-121800 exec mysql-7db894d786-4krc7 -- mysql -ppassword -e "show databases;": exit status 1 (360.3691ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-121800 exec mysql-7db894d786-4krc7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-121800 exec mysql-7db894d786-4krc7 -- mysql -ppassword -e "show databases;": exit status 1 (371.8814ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-121800 exec mysql-7db894d786-4krc7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-121800 exec mysql-7db894d786-4krc7 -- mysql -ppassword -e "show databases;": exit status 1 (795.9872ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-121800 exec mysql-7db894d786-4krc7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-121800 exec mysql-7db894d786-4krc7 -- mysql -ppassword -e "show databases;": exit status 1 (555.49ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-121800 exec mysql-7db894d786-4krc7 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-121800 exec mysql-7db894d786-4krc7 -- mysql -ppassword -e "show databases;": exit status 1 (347.3994ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-121800 exec mysql-7db894d786-4krc7 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (66.76s)
TestFunctional/parallel/FileSync (3.37s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/8256/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo cat /etc/test/nested/copy/8256/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo cat /etc/test/nested/copy/8256/hosts": (3.3711874s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (3.37s)
TestFunctional/parallel/CertSync (22.46s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/8256.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo cat /etc/ssl/certs/8256.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo cat /etc/ssl/certs/8256.pem": (3.7888709s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/8256.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo cat /usr/share/ca-certificates/8256.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo cat /usr/share/ca-certificates/8256.pem": (4.2633541s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo cat /etc/ssl/certs/51391683.0": (3.5855442s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/82562.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo cat /etc/ssl/certs/82562.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo cat /etc/ssl/certs/82562.pem": (3.5032135s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/82562.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo cat /usr/share/ca-certificates/82562.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo cat /usr/share/ca-certificates/82562.pem": (3.4240253s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (3.8886529s)
--- PASS: TestFunctional/parallel/CertSync (22.46s)
TestFunctional/parallel/NodeLabels (0.21s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-121800 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.21s)
TestFunctional/parallel/NonActiveRuntimeDisabled (4.34s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-121800 ssh "sudo systemctl is-active crio": exit status 1 (4.3362742s)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (4.34s)
TestFunctional/parallel/License (2.56s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (2.5431948s)
--- PASS: TestFunctional/parallel/License (2.56s)
TestFunctional/parallel/ImageCommands/ImageListShort (2.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image ls --format short --alsologtostderr: (2.6848642s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-121800 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-121800
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-121800
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-121800 image ls --format short --alsologtostderr:
I0706 20:21:49.001969    8100 out.go:296] Setting OutFile to fd 628 ...
I0706 20:21:49.062427    8100 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 20:21:49.062427    8100 out.go:309] Setting ErrFile to fd 924...
I0706 20:21:49.062427    8100 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 20:21:49.073015    8100 config.go:182] Loaded profile config "functional-121800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 20:21:49.077511    8100 config.go:182] Loaded profile config "functional-121800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 20:21:49.077866    8100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-121800 ).state
I0706 20:21:49.747499    8100 main.go:141] libmachine: [stdout =====>] : Running
I0706 20:21:49.747499    8100 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:49.758265    8100 ssh_runner.go:195] Run: systemctl --version
I0706 20:21:49.758265    8100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-121800 ).state
I0706 20:21:50.422565    8100 main.go:141] libmachine: [stdout =====>] : Running
I0706 20:21:50.422565    8100 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:50.422565    8100 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-121800 ).networkadapters[0]).ipaddresses[0]
I0706 20:21:51.383573    8100 main.go:141] libmachine: [stdout =====>] : 172.29.71.209
I0706 20:21:51.384005    8100 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:51.384448    8100 sshutil.go:53] new ssh client: &{IP:172.29.71.209 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-121800\id_rsa Username:docker}
I0706 20:21:51.484402    8100 ssh_runner.go:235] Completed: systemctl --version: (1.7261239s)
I0706 20:21:51.492266    8100 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (2.69s)
TestFunctional/parallel/ImageCommands/ImageListTable (2.92s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image ls --format table --alsologtostderr: (2.9218706s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-121800 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.27.3           | 7cffc01dba0e1 | 112MB  |
| registry.k8s.io/kube-scheduler              | v1.27.3           | 41697ceeb70b3 | 58.4MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-121800 | 3fb765bdfaf31 | 30B    |
| docker.io/library/nginx                     | latest            | 021283c8eb95b | 187MB  |
| registry.k8s.io/etcd                        | 3.5.7-0           | 86b6af7dd652c | 296MB  |
| docker.io/library/nginx                     | alpine            | 4937520ae206c | 41.4MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.27.3           | 08a0c939e61b7 | 121MB  |
| registry.k8s.io/kube-proxy                  | v1.27.3           | 5780543258cf0 | 71.1MB |
| docker.io/library/mysql                     | 5.7               | 2be84dd575ee2 | 569MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/google-containers/addon-resizer      | functional-121800 | ffd4cfbbe753e | 32.9MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-121800 image ls --format table --alsologtostderr:
I0706 20:21:55.012717    2588 out.go:296] Setting OutFile to fd 708 ...
I0706 20:21:55.099358    2588 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 20:21:55.099358    2588 out.go:309] Setting ErrFile to fd 896...
I0706 20:21:55.099358    2588 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 20:21:55.113185    2588 config.go:182] Loaded profile config "functional-121800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 20:21:55.113185    2588 config.go:182] Loaded profile config "functional-121800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 20:21:55.114097    2588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-121800 ).state
I0706 20:21:55.819243    2588 main.go:141] libmachine: [stdout =====>] : Running

I0706 20:21:55.819363    2588 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:55.832027    2588 ssh_runner.go:195] Run: systemctl --version
I0706 20:21:55.832027    2588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-121800 ).state
I0706 20:21:56.516247    2588 main.go:141] libmachine: [stdout =====>] : Running

I0706 20:21:56.516247    2588 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:56.516247    2588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-121800 ).networkadapters[0]).ipaddresses[0]
I0706 20:21:57.627836    2588 main.go:141] libmachine: [stdout =====>] : 172.29.71.209

I0706 20:21:57.627886    2588 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:57.628272    2588 sshutil.go:53] new ssh client: &{IP:172.29.71.209 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-121800\id_rsa Username:docker}
I0706 20:21:57.733658    2588 ssh_runner.go:235] Completed: systemctl --version: (1.9016164s)
I0706 20:21:57.741860    2588 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (2.92s)

x
+
TestFunctional/parallel/ImageCommands/ImageListJson (2.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image ls --format json --alsologtostderr: (2.7809415s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-121800 image ls --format json --alsologtostderr:
[{"id":"2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"569000000"},{"id":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"71100000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"112000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-121800"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"121000000"},{"id":"021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41400000"},{"id":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"58400000"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"296000000"},{"id":"3fb765bdfaf31ed1b3e2c119540b0c7421f39c258776f387d5aa37ffe221459e","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-121800"],"size":"30"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-121800 image ls --format json --alsologtostderr:
I0706 20:21:52.934020    8948 out.go:296] Setting OutFile to fd 680 ...
I0706 20:21:52.993652    8948 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 20:21:52.993652    8948 out.go:309] Setting ErrFile to fd 724...
I0706 20:21:52.993652    8948 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 20:21:53.007351    8948 config.go:182] Loaded profile config "functional-121800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 20:21:53.007474    8948 config.go:182] Loaded profile config "functional-121800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 20:21:53.008223    8948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-121800 ).state
I0706 20:21:53.673005    8948 main.go:141] libmachine: [stdout =====>] : Running

I0706 20:21:53.673005    8948 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:53.683403    8948 ssh_runner.go:195] Run: systemctl --version
I0706 20:21:53.683403    8948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-121800 ).state
I0706 20:21:54.371905    8948 main.go:141] libmachine: [stdout =====>] : Running

I0706 20:21:54.371905    8948 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:54.371905    8948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-121800 ).networkadapters[0]).ipaddresses[0]
I0706 20:21:55.412721    8948 main.go:141] libmachine: [stdout =====>] : 172.29.71.209

I0706 20:21:55.412965    8948 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:55.413120    8948 sshutil.go:53] new ssh client: &{IP:172.29.71.209 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-121800\id_rsa Username:docker}
I0706 20:21:55.520592    8948 ssh_runner.go:235] Completed: systemctl --version: (1.8371755s)
I0706 20:21:55.528023    8948 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (2.78s)

x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (2.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image ls --format yaml --alsologtostderr: (2.6815465s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-121800 image ls --format yaml --alsologtostderr:
- id: 7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "112000000"
- id: 41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "58400000"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "296000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "41400000"
- id: 08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "121000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "71100000"
- id: 2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "569000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 3fb765bdfaf31ed1b3e2c119540b0c7421f39c258776f387d5aa37ffe221459e
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-121800
size: "30"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-121800
size: "32900000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-121800 image ls --format yaml --alsologtostderr:
I0706 20:21:50.230247    9740 out.go:296] Setting OutFile to fd 944 ...
I0706 20:21:50.310320    9740 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 20:21:50.310320    9740 out.go:309] Setting ErrFile to fd 776...
I0706 20:21:50.310320    9740 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 20:21:50.324571    9740 config.go:182] Loaded profile config "functional-121800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 20:21:50.325265    9740 config.go:182] Loaded profile config "functional-121800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 20:21:50.326034    9740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-121800 ).state
I0706 20:21:50.973804    9740 main.go:141] libmachine: [stdout =====>] : Running

I0706 20:21:50.973804    9740 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:50.983320    9740 ssh_runner.go:195] Run: systemctl --version
I0706 20:21:50.983320    9740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-121800 ).state
I0706 20:21:51.668833    9740 main.go:141] libmachine: [stdout =====>] : Running

I0706 20:21:51.669026    9740 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:51.669026    9740 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-121800 ).networkadapters[0]).ipaddresses[0]
I0706 20:21:52.646733    9740 main.go:141] libmachine: [stdout =====>] : 172.29.71.209

I0706 20:21:52.646733    9740 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:52.646733    9740 sshutil.go:53] new ssh client: &{IP:172.29.71.209 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-121800\id_rsa Username:docker}
I0706 20:21:52.751277    9740 ssh_runner.go:235] Completed: systemctl --version: (1.7679442s)
I0706 20:21:52.759386    9740 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (2.68s)

x
+
TestFunctional/parallel/ImageCommands/ImageBuild (12.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-121800 ssh pgrep buildkitd: exit status 1 (3.3552538s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image build -t localhost/my-image:functional-121800 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image build -t localhost/my-image:functional-121800 testdata\build --alsologtostderr: (6.180463s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-121800 image build -t localhost/my-image:functional-121800 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in bb4eb640d036
Removing intermediate container bb4eb640d036
---> e8d71216a967
Step 3/3 : ADD content.txt /
---> 0e83b92be93b
Successfully built 0e83b92be93b
Successfully tagged localhost/my-image:functional-121800
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-121800 image build -t localhost/my-image:functional-121800 testdata\build --alsologtostderr:
I0706 20:21:55.030135     196 out.go:296] Setting OutFile to fd 932 ...
I0706 20:21:55.118993     196 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 20:21:55.118993     196 out.go:309] Setting ErrFile to fd 820...
I0706 20:21:55.118993     196 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0706 20:21:55.134728     196 config.go:182] Loaded profile config "functional-121800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 20:21:55.149338     196 config.go:182] Loaded profile config "functional-121800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0706 20:21:55.150017     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-121800 ).state
I0706 20:21:55.834388     196 main.go:141] libmachine: [stdout =====>] : Running

I0706 20:21:55.834388     196 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:55.844610     196 ssh_runner.go:195] Run: systemctl --version
I0706 20:21:55.844610     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-121800 ).state
I0706 20:21:56.531387     196 main.go:141] libmachine: [stdout =====>] : Running

I0706 20:21:56.531387     196 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:56.531441     196 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-121800 ).networkadapters[0]).ipaddresses[0]
I0706 20:21:57.643936     196 main.go:141] libmachine: [stdout =====>] : 172.29.71.209

I0706 20:21:57.644011     196 main.go:141] libmachine: [stderr =====>] : 
I0706 20:21:57.644351     196 sshutil.go:53] new ssh client: &{IP:172.29.71.209 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-121800\id_rsa Username:docker}
I0706 20:21:57.753472     196 ssh_runner.go:235] Completed: systemctl --version: (1.9088478s)
I0706 20:21:57.753472     196 build_images.go:151] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2079285921.tar
I0706 20:21:57.763722     196 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0706 20:21:57.795577     196 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2079285921.tar
I0706 20:21:57.801947     196 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2079285921.tar: stat -c "%s %y" /var/lib/minikube/build/build.2079285921.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2079285921.tar': No such file or directory
I0706 20:21:57.802108     196 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2079285921.tar --> /var/lib/minikube/build/build.2079285921.tar (3072 bytes)
I0706 20:21:57.881511     196 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2079285921
I0706 20:21:57.905749     196 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2079285921 -xf /var/lib/minikube/build/build.2079285921.tar
I0706 20:21:57.920158     196 docker.go:339] Building image: /var/lib/minikube/build/build.2079285921
I0706 20:21:57.928472     196 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-121800 /var/lib/minikube/build/build.2079285921
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0706 20:22:01.004000     196 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-121800 /var/lib/minikube/build/build.2079285921: (3.0754501s)
I0706 20:22:01.013471     196 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2079285921
I0706 20:22:01.045166     196 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2079285921.tar
I0706 20:22:01.060294     196 build_images.go:207] Built localhost/my-image:functional-121800 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2079285921.tar
I0706 20:22:01.060410     196 build_images.go:123] succeeded building to: functional-121800
I0706 20:22:01.060410     196 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image ls: (2.5789701s)
E0706 20:23:31.885580    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (12.12s)

x
+
TestFunctional/parallel/ImageCommands/Setup (3.76s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.5258082s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-121800
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.76s)

x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (14.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image load --daemon gcr.io/google-containers/addon-resizer:functional-121800 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image load --daemon gcr.io/google-containers/addon-resizer:functional-121800 --alsologtostderr: (11.4949218s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image ls: (2.7440826s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (14.24s)

x
+
TestFunctional/parallel/Version/short (0.24s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 version --short
--- PASS: TestFunctional/parallel/Version/short (0.24s)

x
+
TestFunctional/parallel/Version/components (3.23s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 version -o=json --components: (3.2248882s)
--- PASS: TestFunctional/parallel/Version/components (3.23s)

x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (10.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image load --daemon gcr.io/google-containers/addon-resizer:functional-121800 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image load --daemon gcr.io/google-containers/addon-resizer:functional-121800 --alsologtostderr: (7.7037641s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image ls: (2.8737916s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (10.58s)

x
+
TestFunctional/parallel/ServiceCmd/DeployApp (9.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-121800 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-121800 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-wd7h7" [f22e1a4e-f89e-416d-a0df-47d3819afb43] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-wd7h7" [f22e1a4e-f89e-416d-a0df-47d3819afb43] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.0220861s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.60s)

x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (3.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1274: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.080386s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (3.56s)

x
+
TestFunctional/parallel/ProfileCmd/profile_list (3.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1309: (dbg) Done: out/minikube-windows-amd64.exe profile list: (3.1844566s)
functional_test.go:1314: Took "3.1844566s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1328: Took "234.0035ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (3.42s)

x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (16.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.3821437s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-121800
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image load --daemon gcr.io/google-containers/addon-resizer:functional-121800 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image load --daemon gcr.io/google-containers/addon-resizer:functional-121800 --alsologtostderr: (10.0780543s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image ls: (2.759114s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (16.47s)

TestFunctional/parallel/ProfileCmd/profile_json_output (3.65s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1360: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (3.3577924s)
functional_test.go:1365: Took "3.3579391s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1378: Took "291.6593ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (3.65s)

TestFunctional/parallel/ServiceCmd/List (5.24s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 service list
functional_test.go:1458: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 service list: (5.2367775s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (5.24s)

TestFunctional/parallel/ServiceCmd/JSONOutput (4.63s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 service list -o json: (4.630006s)
functional_test.go:1493: Took "4.6302144s" to run "out/minikube-windows-amd64.exe -p functional-121800 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (4.63s)

TestFunctional/parallel/ServiceCmd/HTTPS (6.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 service --namespace=default --https --url hello-node: (6.3902489s)
functional_test.go:1521: found endpoint: https://172.29.71.209:30624
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (6.39s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image save gcr.io/google-containers/addon-resizer:functional-121800 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image save gcr.io/google-containers/addon-resizer:functional-121800 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (5.5262442s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (5.53s)

TestFunctional/parallel/ServiceCmd/Format (6.96s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 service hello-node --url --format={{.IP}}: (6.9559001s)
--- PASS: TestFunctional/parallel/ServiceCmd/Format (6.96s)

TestFunctional/parallel/ImageCommands/ImageRemove (5.55s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image rm gcr.io/google-containers/addon-resizer:functional-121800 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image rm gcr.io/google-containers/addon-resizer:functional-121800 --alsologtostderr: (2.7908292s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image ls: (2.7601519s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (5.55s)

TestFunctional/parallel/ServiceCmd/URL (6.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 service hello-node --url
functional_test.go:1558: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 service hello-node --url: (6.330138s)
functional_test.go:1564: found endpoint for hello-node: http://172.29.71.209:30624
--- PASS: TestFunctional/parallel/ServiceCmd/URL (6.33s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (8.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (5.2982559s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image ls: (2.934346s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (8.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.73s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-121800
functional_test.go:418: (dbg) Done: docker rmi gcr.io/google-containers/addon-resizer:functional-121800: (1.0886411s)
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 image save --daemon gcr.io/google-containers/addon-resizer:functional-121800 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-121800 image save --daemon gcr.io/google-containers/addon-resizer:functional-121800 --alsologtostderr: (8.4348894s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-121800
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (9.73s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (3.25s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-121800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-121800 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-121800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 10892: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 4184: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-121800 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (3.25s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-121800 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.56s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-121800 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [bfe76101-d807-4d70-b9a4-a85d440c428b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [bfe76101-d807-4d70-b9a4-a85d440c428b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.032404s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.56s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-121800 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 6240: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/DockerEnv/powershell (14.4s)
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-121800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-121800"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-121800 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-121800": (9.1752732s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-121800 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-121800 docker-env | Invoke-Expression ; docker images": (5.2141419s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (14.40s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.98s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.98s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.94s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.94s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.95s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-121800 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.95s)

TestFunctional/delete_addon-resizer_images (0.64s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-121800
--- PASS: TestFunctional/delete_addon-resizer_images (0.64s)

TestFunctional/delete_my-image_image (0.17s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-121800
--- PASS: TestFunctional/delete_my-image_image (0.17s)

TestFunctional/delete_minikube_cached_images (0.18s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-121800
--- PASS: TestFunctional/delete_minikube_cached_images (0.18s)

TestImageBuild/serial/Setup (105.95s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-877200 --driver=hyperv
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-877200 --driver=hyperv: (1m45.9456363s)
--- PASS: TestImageBuild/serial/Setup (105.95s)

TestImageBuild/serial/NormalBuild (4.45s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-877200
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-877200: (4.4511734s)
--- PASS: TestImageBuild/serial/NormalBuild (4.45s)

TestImageBuild/serial/BuildWithBuildArg (4.55s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-877200
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-877200: (4.5522786s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (4.55s)

TestImageBuild/serial/BuildWithDockerIgnore (3.16s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-877200
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-877200: (3.1635446s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (3.16s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.89s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-877200
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-877200: (2.8913976s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (2.89s)

TestIngressAddonLegacy/StartLegacyK8sCluster (127.16s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-windows-amd64.exe start -p ingress-addon-legacy-927000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv
E0706 20:29:55.105909    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:29:56.179836    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:29:56.195668    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:29:56.211202    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:29:56.242464    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:29:56.289771    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:29:56.385145    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:29:56.558513    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:29:56.888437    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:29:57.534112    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:29:58.827619    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:30:01.393351    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:30:06.523522    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:30:16.787311    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:30:37.279690    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-windows-amd64.exe start -p ingress-addon-legacy-927000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=hyperv: (2m7.1571861s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (127.16s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (25.84s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-927000 addons enable ingress --alsologtostderr -v=5
E0706 20:31:18.245741    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-927000 addons enable ingress --alsologtostderr -v=5: (25.8438247s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (25.84s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (2.83s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-927000 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-927000 addons enable ingress-dns --alsologtostderr -v=5: (2.8247667s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (2.83s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (47.22s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-927000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-927000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (7.682549s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-927000 replace --force -f testdata\nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-927000 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2cf56f67-7544-4507-8eba-b1ceedfaa701] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2cf56f67-7544-4507-8eba-b1ceedfaa701] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 19.0287468s
addons_test.go:238: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-927000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-927000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (3.1542092s)
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-927000 replace --force -f testdata\ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-927000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 172.29.75.26
addons_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-927000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-927000 addons disable ingress-dns --alsologtostderr -v=1: (4.9704308s)
addons_test.go:287: (dbg) Run:  out/minikube-windows-amd64.exe -p ingress-addon-legacy-927000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-windows-amd64.exe -p ingress-addon-legacy-927000 addons disable ingress --alsologtostderr -v=1: (9.8207063s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (47.22s)

TestJSONOutput/start/Command (151.48s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-033500 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0706 20:33:31.896209    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:34:56.177656    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:35:24.025416    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-033500 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (2m31.4808895s)
--- PASS: TestJSONOutput/start/Command (151.48s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (3.21s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-033500 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-033500 --output=json --user=testUser: (3.2128123s)
--- PASS: TestJSONOutput/pause/Command (3.21s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (3.2s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-033500 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-033500 --output=json --user=testUser: (3.1963269s)
--- PASS: TestJSONOutput/unpause/Command (3.20s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (22.74s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-033500 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-033500 --output=json --user=testUser: (22.7431812s)
--- PASS: TestJSONOutput/stop/Command (22.74s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-702200 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-702200 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (233.1897ms)

-- stdout --
	{"specversion":"1.0","id":"c6d4c861-1cee-4c3e-b3e6-f529a9e7fed6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-702200] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"70e939fb-43ce-4faf-b92d-d8358e87183a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"aa62f0da-0700-479f-bb23-ad053883f5b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a0a220b2-f5b0-4b4f-9713-7d65a415b226","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"e0f84e63-adfb-4acb-8ae3-f6179e07def8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16832"}}
	{"specversion":"1.0","id":"993e8400-30ef-4508-9985-feac4fbf6aa6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e04f3111-65b6-4b39-9d26-0db1b0ab7cc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-702200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-702200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-702200: (1.0988042s)
--- PASS: TestErrorJSONOutput (1.33s)

TestMainNoArgs (0.19s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.19s)

TestMinikubeProfile (285.84s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-938000 --driver=hyperv
E0706 20:36:31.232954    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:36:31.248526    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:36:31.263104    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:36:31.294533    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:36:31.341934    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:36:31.434820    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:36:31.606725    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:36:31.936062    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:36:32.584204    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:36:33.879644    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:36:36.441120    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:36:41.575490    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:36:51.824403    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:37:12.312308    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:37:53.273580    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-938000 --driver=hyperv: (1m43.8367473s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-938000 --driver=hyperv
E0706 20:38:31.887090    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:39:15.202007    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-938000 --driver=hyperv: (1m46.6352987s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-938000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (5.5193471s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-938000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
E0706 20:39:56.178263    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (5.5582504s)
helpers_test.go:175: Cleaning up "second-938000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-938000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-938000: (29.334861s)
helpers_test.go:175: Cleaning up "first-938000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-938000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-938000: (34.1274275s)
--- PASS: TestMinikubeProfile (285.84s)

TestMountStart/serial/StartWithMountFirst (66.94s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-141900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0706 20:41:31.242942    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:41:59.054583    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-141900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (1m5.9287972s)
--- PASS: TestMountStart/serial/StartWithMountFirst (66.94s)

TestMountStart/serial/VerifyMountFirst (3.21s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-141900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-141900 ssh -- ls /minikube-host: (3.2101927s)
--- PASS: TestMountStart/serial/VerifyMountFirst (3.21s)

TestMountStart/serial/StartWithMountSecond (66.77s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-141900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-141900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (1m5.7691718s)
--- PASS: TestMountStart/serial/StartWithMountSecond (66.77s)

TestMountStart/serial/VerifyMountSecond (3.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-141900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-141900 ssh -- ls /minikube-host: (3.2515708s)
--- PASS: TestMountStart/serial/VerifyMountSecond (3.25s)

TestMountStart/serial/DeleteFirst (11.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-141900 --alsologtostderr -v=5
E0706 20:43:31.906448    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-141900 --alsologtostderr -v=5: (11.7305536s)
--- PASS: TestMountStart/serial/DeleteFirst (11.73s)

TestMountStart/serial/VerifyMountPostDelete (3.08s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-141900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-141900 ssh -- ls /minikube-host: (3.0787075s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (3.08s)

TestMountStart/serial/Stop (9.99s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-141900
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-141900: (9.9890512s)
--- PASS: TestMountStart/serial/Stop (9.99s)

TestMountStart/serial/RestartStopped (52.27s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-141900
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-141900: (51.2604379s)
--- PASS: TestMountStart/serial/RestartStopped (52.27s)

TestMountStart/serial/VerifyMountPostStop (3.18s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-141900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-141900 ssh -- ls /minikube-host: (3.1847076s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (3.18s)

TestMultiNode/serial/FreshStart2Nodes (232.61s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-144300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0706 20:46:19.401648    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 20:46:31.243893    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 20:46:35.121249    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 20:48:31.892922    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
multinode_test.go:85: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-144300 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (3m44.040052s)
multinode_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 status --alsologtostderr
multinode_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 status --alsologtostderr: (8.56976s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (232.61s)

TestMultiNode/serial/DeployApp2Nodes (9.14s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- rollout status deployment/busybox: (3.7711163s)
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- exec busybox-67b7f59bb-47tnt -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- exec busybox-67b7f59bb-47tnt -- nslookup kubernetes.io: (1.6361869s)
multinode_test.go:524: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- exec busybox-67b7f59bb-qp6pw -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- exec busybox-67b7f59bb-47tnt -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- exec busybox-67b7f59bb-qp6pw -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- exec busybox-67b7f59bb-47tnt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-144300 -- exec busybox-67b7f59bb-qp6pw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.14s)

TestMultiNode/serial/AddNode (112.14s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-144300 -v 3 --alsologtostderr
E0706 20:49:56.193879    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
multinode_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-144300 -v 3 --alsologtostderr: (1m39.5170378s)
multinode_test.go:116: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 status --alsologtostderr
multinode_test.go:116: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 status --alsologtostderr: (12.6187517s)
--- PASS: TestMultiNode/serial/AddNode (112.14s)

TestMultiNode/serial/ProfileList (2.89s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.8852255s)
--- PASS: TestMultiNode/serial/ProfileList (2.89s)

TestMultiNode/serial/CopyFile (127.19s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 status --output json --alsologtostderr
E0706 20:51:31.245290    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
multinode_test.go:173: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 status --output json --alsologtostderr: (12.7117363s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 cp testdata\cp-test.txt multinode-144300:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 cp testdata\cp-test.txt multinode-144300:/home/docker/cp-test.txt: (3.2635021s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300 "sudo cat /home/docker/cp-test.txt": (3.3384934s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2544889846\001\cp-test_multinode-144300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2544889846\001\cp-test_multinode-144300.txt: (3.2686988s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300 "sudo cat /home/docker/cp-test.txt": (3.2970879s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300:/home/docker/cp-test.txt multinode-144300-m02:/home/docker/cp-test_multinode-144300_multinode-144300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300:/home/docker/cp-test.txt multinode-144300-m02:/home/docker/cp-test_multinode-144300_multinode-144300-m02.txt: (5.8370415s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300 "sudo cat /home/docker/cp-test.txt": (3.2809501s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m02 "sudo cat /home/docker/cp-test_multinode-144300_multinode-144300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m02 "sudo cat /home/docker/cp-test_multinode-144300_multinode-144300-m02.txt": (3.3146063s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300:/home/docker/cp-test.txt multinode-144300-m03:/home/docker/cp-test_multinode-144300_multinode-144300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300:/home/docker/cp-test.txt multinode-144300-m03:/home/docker/cp-test_multinode-144300_multinode-144300-m03.txt: (5.6937684s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300 "sudo cat /home/docker/cp-test.txt": (3.2723403s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m03 "sudo cat /home/docker/cp-test_multinode-144300_multinode-144300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m03 "sudo cat /home/docker/cp-test_multinode-144300_multinode-144300-m03.txt": (3.4041507s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 cp testdata\cp-test.txt multinode-144300-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 cp testdata\cp-test.txt multinode-144300-m02:/home/docker/cp-test.txt: (3.317887s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m02 "sudo cat /home/docker/cp-test.txt": (3.3523079s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2544889846\001\cp-test_multinode-144300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2544889846\001\cp-test_multinode-144300-m02.txt: (3.3606744s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m02 "sudo cat /home/docker/cp-test.txt": (3.3767147s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300-m02:/home/docker/cp-test.txt multinode-144300:/home/docker/cp-test_multinode-144300-m02_multinode-144300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300-m02:/home/docker/cp-test.txt multinode-144300:/home/docker/cp-test_multinode-144300-m02_multinode-144300.txt: (5.8378171s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m02 "sudo cat /home/docker/cp-test.txt": (3.3359957s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300 "sudo cat /home/docker/cp-test_multinode-144300-m02_multinode-144300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300 "sudo cat /home/docker/cp-test_multinode-144300-m02_multinode-144300.txt": (3.3445755s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300-m02:/home/docker/cp-test.txt multinode-144300-m03:/home/docker/cp-test_multinode-144300-m02_multinode-144300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300-m02:/home/docker/cp-test.txt multinode-144300-m03:/home/docker/cp-test_multinode-144300-m02_multinode-144300-m03.txt: (5.808216s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m02 "sudo cat /home/docker/cp-test.txt"
E0706 20:52:54.426576    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m02 "sudo cat /home/docker/cp-test.txt": (3.2664975s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m03 "sudo cat /home/docker/cp-test_multinode-144300-m02_multinode-144300-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m03 "sudo cat /home/docker/cp-test_multinode-144300-m02_multinode-144300-m03.txt": (3.2897297s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 cp testdata\cp-test.txt multinode-144300-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 cp testdata\cp-test.txt multinode-144300-m03:/home/docker/cp-test.txt: (3.3060811s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m03 "sudo cat /home/docker/cp-test.txt": (3.3181633s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2544889846\001\cp-test_multinode-144300-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile2544889846\001\cp-test_multinode-144300-m03.txt: (3.3363861s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m03 "sudo cat /home/docker/cp-test.txt": (3.3675642s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300-m03:/home/docker/cp-test.txt multinode-144300:/home/docker/cp-test_multinode-144300-m03_multinode-144300.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300-m03:/home/docker/cp-test.txt multinode-144300:/home/docker/cp-test_multinode-144300-m03_multinode-144300.txt: (5.7020908s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m03 "sudo cat /home/docker/cp-test.txt": (3.3002895s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300 "sudo cat /home/docker/cp-test_multinode-144300-m03_multinode-144300.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300 "sudo cat /home/docker/cp-test_multinode-144300-m03_multinode-144300.txt": (3.3353978s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300-m03:/home/docker/cp-test.txt multinode-144300-m02:/home/docker/cp-test_multinode-144300-m03_multinode-144300-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 cp multinode-144300-m03:/home/docker/cp-test.txt multinode-144300-m02:/home/docker/cp-test_multinode-144300-m03_multinode-144300-m02.txt: (5.9206193s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m03 "sudo cat /home/docker/cp-test.txt"
E0706 20:53:31.899814    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m03 "sudo cat /home/docker/cp-test.txt": (3.3966527s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m02 "sudo cat /home/docker/cp-test_multinode-144300-m03_multinode-144300-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 ssh -n multinode-144300-m02 "sudo cat /home/docker/cp-test_multinode-144300-m03_multinode-144300-m02.txt": (3.2215232s)
--- PASS: TestMultiNode/serial/CopyFile (127.19s)

                                                
                                    
TestMultiNode/serial/StopNode (29.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 node stop m03: (11.0863688s)
multinode_test.go:216: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-144300 status: exit status 7 (9.2333441s)

                                                
                                                
-- stdout --
	multinode-144300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-144300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-144300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-144300 status --alsologtostderr: exit status 7 (9.0625161s)

                                                
                                                
-- stdout --
	multinode-144300
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-144300-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-144300-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0706 20:53:58.547209   10416 out.go:296] Setting OutFile to fd 668 ...
	I0706 20:53:58.606446   10416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 20:53:58.606446   10416 out.go:309] Setting ErrFile to fd 688...
	I0706 20:53:58.606625   10416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 20:53:58.617247   10416 out.go:303] Setting JSON to false
	I0706 20:53:58.617247   10416 mustload.go:65] Loading cluster: multinode-144300
	I0706 20:53:58.617841   10416 notify.go:220] Checking for updates...
	I0706 20:53:58.618485   10416 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 20:53:58.618789   10416 status.go:255] checking status of multinode-144300 ...
	I0706 20:53:58.619124   10416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:53:59.280862   10416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:53:59.280862   10416 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:53:59.280952   10416 status.go:330] multinode-144300 host status = "Running" (err=<nil>)
	I0706 20:53:59.280952   10416 host.go:66] Checking if "multinode-144300" exists ...
	I0706 20:53:59.281688   10416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:53:59.931544   10416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:53:59.931772   10416 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:53:59.931772   10416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:54:00.887188   10416 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:54:00.887188   10416 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:54:00.887397   10416 host.go:66] Checking if "multinode-144300" exists ...
	I0706 20:54:00.896516   10416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0706 20:54:00.896516   10416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 20:54:01.556837   10416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:54:01.556837   10416 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:54:01.556837   10416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300 ).networkadapters[0]).ipaddresses[0]
	I0706 20:54:02.528758   10416 main.go:141] libmachine: [stdout =====>] : 172.29.70.202
	
	I0706 20:54:02.528972   10416 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:54:02.529275   10416 sshutil.go:53] new ssh client: &{IP:172.29.70.202 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300\id_rsa Username:docker}
	I0706 20:54:02.633082   10416 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.736553s)
	I0706 20:54:02.642962   10416 ssh_runner.go:195] Run: systemctl --version
	I0706 20:54:02.664234   10416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 20:54:02.683434   10416 kubeconfig.go:92] found "multinode-144300" server: "https://172.29.70.202:8443"
	I0706 20:54:02.683434   10416 api_server.go:166] Checking apiserver status ...
	I0706 20:54:02.693667   10416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0706 20:54:02.719944   10416 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2000/cgroup
	I0706 20:54:02.736476   10416 api_server.go:182] apiserver freezer: "3:freezer:/kubepods/burstable/podcde174f192a25fd146cf674bbcb8ed25/67b35d14730ac347a854f8cac72336014192f32ab8fee38864c05a10f221e1f3"
	I0706 20:54:02.746480   10416 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podcde174f192a25fd146cf674bbcb8ed25/67b35d14730ac347a854f8cac72336014192f32ab8fee38864c05a10f221e1f3/freezer.state
	I0706 20:54:02.758304   10416 api_server.go:204] freezer state: "THAWED"
	I0706 20:54:02.758304   10416 api_server.go:253] Checking apiserver healthz at https://172.29.70.202:8443/healthz ...
	I0706 20:54:02.766799   10416 api_server.go:279] https://172.29.70.202:8443/healthz returned 200:
	ok
	I0706 20:54:02.767049   10416 status.go:421] multinode-144300 apiserver status = Running (err=<nil>)
	I0706 20:54:02.767049   10416 status.go:257] multinode-144300 status: &{Name:multinode-144300 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0706 20:54:02.767049   10416 status.go:255] checking status of multinode-144300-m02 ...
	I0706 20:54:02.767637   10416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:54:03.455847   10416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:54:03.456141   10416 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:54:03.456141   10416 status.go:330] multinode-144300-m02 host status = "Running" (err=<nil>)
	I0706 20:54:03.456141   10416 host.go:66] Checking if "multinode-144300-m02" exists ...
	I0706 20:54:03.457058   10416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:54:04.121797   10416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:54:04.121797   10416 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:54:04.121797   10416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:54:05.096873   10416 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:54:05.096873   10416 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:54:05.096961   10416 host.go:66] Checking if "multinode-144300-m02" exists ...
	I0706 20:54:05.106256   10416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0706 20:54:05.106256   10416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 20:54:05.763419   10416 main.go:141] libmachine: [stdout =====>] : Running
	
	I0706 20:54:05.763644   10416 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:54:05.763644   10416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-144300-m02 ).networkadapters[0]).ipaddresses[0]
	I0706 20:54:06.701766   10416 main.go:141] libmachine: [stdout =====>] : 172.29.79.241
	
	I0706 20:54:06.701766   10416 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:54:06.701766   10416 sshutil.go:53] new ssh client: &{IP:172.29.79.241 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-144300-m02\id_rsa Username:docker}
	I0706 20:54:06.818772   10416 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.7125035s)
	I0706 20:54:06.828722   10416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0706 20:54:06.845724   10416 status.go:257] multinode-144300-m02 status: &{Name:multinode-144300-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0706 20:54:06.845724   10416 status.go:255] checking status of multinode-144300-m03 ...
	I0706 20:54:06.846846   10416 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m03 ).state
	I0706 20:54:07.476765   10416 main.go:141] libmachine: [stdout =====>] : Off
	
	I0706 20:54:07.476765   10416 main.go:141] libmachine: [stderr =====>] : 
	I0706 20:54:07.476765   10416 status.go:330] multinode-144300-m03 host status = "Stopped" (err=<nil>)
	I0706 20:54:07.476765   10416 status.go:343] host is not running, skipping remaining checks
	I0706 20:54:07.476855   10416 status.go:257] multinode-144300-m03 status: &{Name:multinode-144300-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (29.38s)
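The StopNode assertions above hinge on the plain-text `minikube -p <profile> status` output (node name, then `key: value` lines per node). A minimal sketch of parsing that format, using a hypothetical helper that is not part of minikube, with sample text copied from the report:

```python
# Hypothetical helper (not part of minikube) that parses the plain-text
# `minikube -p <profile> status` output shown above into a dict per node.
def parse_minikube_status(text: str) -> dict:
    nodes: dict = {}
    current = None
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            current = None          # a blank line ends the current node block
        elif ":" in line and current is not None:
            key, _, value = line.partition(":")
            nodes[current][key.strip()] = value.strip()
        elif ":" not in line:
            current = line          # a bare name starts a new node block
            nodes[current] = {}
    return nodes

# Sample taken verbatim from the StopNode output above.
SAMPLE = """multinode-144300
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-144300-m03
type: Worker
host: Stopped
kubelet: Stopped
"""
```

With this shape, the test's expectation (exit status 7 when any node is stopped) can be checked by scanning the parsed `host` fields.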

                                                
                                    
TestMultiNode/serial/StartAfterStop (87.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 node start m03 --alsologtostderr
E0706 20:54:56.198573    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
multinode_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 node start m03 --alsologtostderr: (1m14.5123289s)
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 status
multinode_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 status: (12.5741805s)
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (87.29s)

                                                
                                    
TestMultiNode/serial/DeleteNode (26.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 node delete m03: (17.5912369s)
multinode_test.go:400: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 status --alsologtostderr
multinode_test.go:400: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 status --alsologtostderr: (8.4436696s)
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (26.45s)
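The `kubectl get nodes -o go-template='…'` check above walks every node's status conditions and prints the status of each `Ready` condition. A rough Python equivalent of that template's logic, run against assumed sample data rather than real cluster output:

```python
# Mirrors the go-template's logic: for each item, for each status condition,
# emit the condition's status when its type is "Ready".
def ready_statuses(nodes_json: dict) -> list:
    out = []
    for item in nodes_json.get("items", []):
        for cond in item.get("status", {}).get("conditions", []):
            if cond.get("type") == "Ready":
                out.append(cond.get("status"))
    return out
```

After a successful node delete, the test expects one `True` per remaining node.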

                                                
                                    
TestMultiNode/serial/StopMultiNode (45.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 stop
E0706 21:01:31.246534    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
multinode_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 stop: (42.6597222s)
multinode_test.go:320: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-144300 status: exit status 7 (1.4132537s)

                                                
                                                
-- stdout --
	multinode-144300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-144300-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-144300 status --alsologtostderr: exit status 7 (1.4445044s)

                                                
                                                
-- stdout --
	multinode-144300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-144300-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0706 21:01:57.253842    7588 out.go:296] Setting OutFile to fd 960 ...
	I0706 21:01:57.309469    7588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 21:01:57.309469    7588 out.go:309] Setting ErrFile to fd 700...
	I0706 21:01:57.309469    7588 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0706 21:01:57.320128    7588 out.go:303] Setting JSON to false
	I0706 21:01:57.320128    7588 mustload.go:65] Loading cluster: multinode-144300
	I0706 21:01:57.320128    7588 notify.go:220] Checking for updates...
	I0706 21:01:57.321143    7588 config.go:182] Loaded profile config "multinode-144300": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0706 21:01:57.321143    7588 status.go:255] checking status of multinode-144300 ...
	I0706 21:01:57.321985    7588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300 ).state
	I0706 21:01:57.934280    7588 main.go:141] libmachine: [stdout =====>] : Off
	
	I0706 21:01:57.934280    7588 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:01:57.934361    7588 status.go:330] multinode-144300 host status = "Stopped" (err=<nil>)
	I0706 21:01:57.934361    7588 status.go:343] host is not running, skipping remaining checks
	I0706 21:01:57.934361    7588 status.go:257] multinode-144300 status: &{Name:multinode-144300 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0706 21:01:57.934488    7588 status.go:255] checking status of multinode-144300-m02 ...
	I0706 21:01:57.935487    7588 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-144300-m02 ).state
	I0706 21:01:58.574283    7588 main.go:141] libmachine: [stdout =====>] : Off
	
	I0706 21:01:58.574358    7588 main.go:141] libmachine: [stderr =====>] : 
	I0706 21:01:58.574358    7588 status.go:330] multinode-144300-m02 host status = "Stopped" (err=<nil>)
	I0706 21:01:58.574358    7588 status.go:343] host is not running, skipping remaining checks
	I0706 21:01:58.574358    7588 status.go:257] multinode-144300-m02 status: &{Name:multinode-144300-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (45.52s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (178.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-144300 --wait=true -v=8 --alsologtostderr --driver=hyperv
E0706 21:02:59.414244    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 21:03:15.135142    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 21:03:31.908436    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
multinode_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-144300 --wait=true -v=8 --alsologtostderr --driver=hyperv: (2m49.7958349s)
multinode_test.go:360: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-144300 status --alsologtostderr
E0706 21:04:56.194096    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
multinode_test.go:360: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-144300 status --alsologtostderr: (8.5806916s)
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (178.85s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (139.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-144300
multinode_test.go:452: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-144300-m02 --driver=hyperv
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-144300-m02 --driver=hyperv: exit status 14 (246.7199ms)

                                                
                                                
-- stdout --
	* [multinode-144300-m02] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-144300-m02' is duplicated with machine name 'multinode-144300-m02' in profile 'multinode-144300'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-144300-m03 --driver=hyperv
E0706 21:06:31.246499    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
multinode_test.go:460: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-144300-m03 --driver=hyperv: (1m50.0550306s)
multinode_test.go:467: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-144300
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-144300: exit status 80 (2.8270246s)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-144300
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-144300-m03 already exists in multinode-144300-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_node_8a500d2181d400fd32bfc5983efc601de14422c3_18.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-144300-m03
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-144300-m03: (25.6991093s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (139.03s)

TestPreload (285.66s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-852700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0706 21:08:31.914450    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
E0706 21:09:34.446800    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 21:09:56.195460    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-852700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (2m17.4981971s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-852700 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-852700 image pull gcr.io/k8s-minikube/busybox: (3.5444708s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-852700
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-852700: (21.4584848s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-852700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0706 21:11:31.250153    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-852700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (1m37.331021s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-852700 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-852700 image list: (2.6646022s)
helpers_test.go:175: Cleaning up "test-preload-852700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-852700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-852700: (23.1602403s)
--- PASS: TestPreload (285.66s)

TestScheduledStopWindows (197.27s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-095800 --memory=2048 --driver=hyperv
E0706 21:13:31.918008    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-095800 --memory=2048 --driver=hyperv: (1m40.5946879s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-095800 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-095800 --schedule 5m: (4.0174188s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-095800 -n scheduled-stop-095800
scheduled_stop_test.go:191: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-095800 -n scheduled-stop-095800: (4.3939513s)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-095800 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-095800 -- sudo systemctl show minikube-scheduled-stop --no-page: (3.3790177s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-095800 --schedule 5s
E0706 21:14:56.204913    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-095800 --schedule 5s: (4.1650553s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-095800
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-095800: exit status 7 (837.6279ms)
-- stdout --
	scheduled-stop-095800
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-095800 -n scheduled-stop-095800
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-095800 -n scheduled-stop-095800: exit status 7 (823.6862ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-095800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-095800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-095800: (19.0495639s)
--- PASS: TestScheduledStopWindows (197.27s)

TestKubernetesUpgrade (596.11s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-990200 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-990200 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=hyperv: (3m20.3673286s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-990200
version_upgrade_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-990200: (32.2023913s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-990200 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-990200 status --format={{.Host}}: exit status 7 (1.1722041s)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-990200 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:255: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-990200 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=hyperv: (2m1.2068196s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-990200 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-990200 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-990200 --memory=2200 --kubernetes-version=v1.16.0 --driver=hyperv: exit status 106 (249.4319ms)
-- stdout --
	* [kubernetes-upgrade-990200] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-990200
	    minikube start -p kubernetes-upgrade-990200 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9902002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-990200 --kubernetes-version=v1.27.3
	    
** /stderr **
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-990200 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:287: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-990200 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=hyperv: (3m25.029267s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-990200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-990200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-990200: (35.6782273s)
--- PASS: TestKubernetesUpgrade (596.11s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.32s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-504800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-504800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (320.0552ms)
-- stdout --
	* [NoKubernetes-504800] minikube v1.30.1 on Microsoft Windows 10 Enterprise N 10.0.19045.3155 Build 19045.3155
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=16832
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.32s)

TestNoKubernetes/serial/StartWithK8s (278.47s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-504800 --driver=hyperv
E0706 21:16:31.246738    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
no_kubernetes_test.go:95: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-504800 --driver=hyperv: (4m33.6595055s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-504800 status -o json
no_kubernetes_test.go:200: (dbg) Done: out/minikube-windows-amd64.exe -p NoKubernetes-504800 status -o json: (4.8116589s)
--- PASS: TestNoKubernetes/serial/StartWithK8s (278.47s)

TestPause/serial/Start (263.45s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-815300 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-815300 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (4m23.4506479s)
--- PASS: TestPause/serial/Start (263.45s)

TestStoppedBinaryUpgrade/Setup (0.86s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.86s)

TestStartStop/group/old-k8s-version/serial/FirstStart (289.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-002700 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperv --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-002700 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperv --kubernetes-version=v1.16.0: (4m49.3001666s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (289.30s)

TestStoppedBinaryUpgrade/MinikubeLogs (6.84s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-322600
version_upgrade_test.go:218: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-322600: (6.8447929s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (6.84s)

TestStartStop/group/no-preload/serial/FirstStart (250.55s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-036100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-036100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.27.3: (4m10.5525012s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (250.55s)

TestStartStop/group/embed-certs/serial/FirstStart (303.13s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-205900 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperv --kubernetes-version=v1.27.3
E0706 21:33:31.915722    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-205900 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperv --kubernetes-version=v1.27.3: (5m3.1253033s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (303.13s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (235.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-730700 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperv --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-730700 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperv --kubernetes-version=v1.27.3: (3m55.2070331s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (235.21s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-002700 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ef04be19-8eb3-4ffc-937d-17ca9d178a8a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ef04be19-8eb3-4ffc-937d-17ca9d178a8a] Running
E0706 21:34:56.212202    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.0444996s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-002700 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.85s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.79s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-002700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-002700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.4549565s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-002700 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.79s)

TestStartStop/group/old-k8s-version/serial/Stop (24.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-002700 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-002700 --alsologtostderr -v=3: (24.4872052s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (24.49s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (5.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-002700 -n old-k8s-version-002700
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-002700 -n old-k8s-version-002700: exit status 7 (935.2659ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-002700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-002700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (4.3423496s)
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (5.28s)

TestStartStop/group/old-k8s-version/serial/SecondStart (516.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-002700 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperv --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-002700 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperv --kubernetes-version=v1.16.0: (8m32.2167959s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-002700 -n old-k8s-version-002700
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-002700 -n old-k8s-version-002700: (4.5977587s)
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (516.81s)

TestStartStop/group/no-preload/serial/DeployApp (10.83s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-036100 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [37748458-f9a7-4405-bd50-c2dc1d856b20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [37748458-f9a7-4405-bd50-c2dc1d856b20] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.03991s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-036100 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.83s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (4.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-036100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-036100 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.8318918s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-036100 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (4.17s)

TestStartStop/group/no-preload/serial/Stop (24.4s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-036100 --alsologtostderr -v=3
E0706 21:36:19.437208    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 21:36:31.264820    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 21:36:35.151844    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-036100 --alsologtostderr -v=3: (24.3960095s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (24.40s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (2.53s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-036100 -n no-preload-036100
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-036100 -n no-preload-036100: exit status 7 (922.9678ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-036100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-036100 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (1.6031497s)
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (2.53s)

TestStartStop/group/no-preload/serial/SecondStart (397.12s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-036100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p no-preload-036100 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.27.3: (6m32.4511324s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-036100 -n no-preload-036100
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-036100 -n no-preload-036100: (4.6714486s)
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (397.12s)

TestStartStop/group/embed-certs/serial/DeployApp (9.78s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-205900 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ee12ba29-0f05-4284-b92e-3051ca2a7434] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ee12ba29-0f05-4284-b92e-3051ca2a7434] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.0380729s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-205900 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.78s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-205900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-205900 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.857211s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-205900 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.19s)

TestStartStop/group/embed-certs/serial/Stop (29.93s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-205900 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-205900 --alsologtostderr -v=3: (29.9294261s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (29.93s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (2.84s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-205900 -n embed-certs-205900
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-205900 -n embed-certs-205900: exit status 7 (905.5158ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-205900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-205900 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (1.929251s)
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (2.84s)

TestStartStop/group/embed-certs/serial/SecondStart (375.92s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-205900 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperv --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-205900 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperv --kubernetes-version=v1.27.3: (6m11.3266865s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-205900 -n embed-certs-205900
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-205900 -n embed-certs-205900: (4.5959843s)
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (375.92s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-730700 create -f testdata\busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d68805c6-f562-411f-88f0-a0a623f2eeff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d68805c6-f562-411f-88f0-a0a623f2eeff] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.0438058s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-730700 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.79s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (4.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-730700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-730700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.6716422s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-730700 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (4.02s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (25.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-730700 --alsologtostderr -v=3
E0706 21:38:31.918556    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-730700 --alsologtostderr -v=3: (25.7090882s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (25.71s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (2.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-730700 -n default-k8s-diff-port-730700
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-730700 -n default-k8s-diff-port-730700: exit status 7 (1.2345405s)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-730700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-730700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (1.6919465s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (2.93s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (636.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-730700 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperv --kubernetes-version=v1.27.3
E0706 21:39:56.205884    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
E0706 21:41:31.272201    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 21:42:54.480688    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-730700 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperv --kubernetes-version=v1.27.3: (10m30.7321739s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-730700 -n default-k8s-diff-port-730700
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-730700 -n default-k8s-diff-port-730700: (5.7346255s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (636.47s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-ccj2s" [392cf448-af87-445a-bb29-3e5981940b03] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.031238s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.42s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-ccj2s" [392cf448-af87-445a-bb29-3e5981940b03] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0185952s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-036100 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.42s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (3.46s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p no-preload-036100 "sudo crictl images -o json"
E0706 21:43:31.928148    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\addons-326800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p no-preload-036100 "sudo crictl images -o json": (3.4584233s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (3.46s)

TestStartStop/group/no-preload/serial/Pause (24.52s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p no-preload-036100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p no-preload-036100 --alsologtostderr -v=1: (3.2961002s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-036100 -n no-preload-036100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-036100 -n no-preload-036100: exit status 2 (4.3849208s)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-036100 -n no-preload-036100
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-036100 -n no-preload-036100: exit status 2 (4.4184627s)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p no-preload-036100 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p no-preload-036100 --alsologtostderr -v=1: (3.3518356s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-036100 -n no-preload-036100
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-036100 -n no-preload-036100: (4.5041806s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-036100 -n no-preload-036100
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p no-preload-036100 -n no-preload-036100: (4.5618185s)
--- PASS: TestStartStop/group/no-preload/serial/Pause (24.52s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gqx4f" [c47ea364-b79f-46c6-80d7-dd74bdcbfb39] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0309334s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lbdp4" [577e3e9d-0884-44ad-8972-780d39d5d2f8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.026805s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-gqx4f" [c47ea364-b79f-46c6-80d7-dd74bdcbfb39] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0124714s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-002700 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.41s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.73s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-lbdp4" [577e3e9d-0884-44ad-8972-780d39d5d2f8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0156656s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-205900 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.73s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (3.75s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p old-k8s-version-002700 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p old-k8s-version-002700 "sudo crictl images -o json": (3.7522306s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (3.75s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (3.54s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p embed-certs-205900 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p embed-certs-205900 "sudo crictl images -o json": (3.5346848s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (3.54s)

TestStartStop/group/old-k8s-version/serial/Pause (26.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-002700 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-002700 --alsologtostderr -v=1: (3.811115s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-002700 -n old-k8s-version-002700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-002700 -n old-k8s-version-002700: exit status 2 (4.6947244s)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-002700 -n old-k8s-version-002700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-002700 -n old-k8s-version-002700: exit status 2 (4.6210992s)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-002700 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-002700 --alsologtostderr -v=1: (3.3572972s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-002700 -n old-k8s-version-002700
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-002700 -n old-k8s-version-002700: (5.0280405s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-002700 -n old-k8s-version-002700
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-002700 -n old-k8s-version-002700: (4.8387976s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (26.35s)

TestStartStop/group/embed-certs/serial/Pause (26.35s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-205900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-205900 --alsologtostderr -v=1: (3.7012902s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-205900 -n embed-certs-205900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-205900 -n embed-certs-205900: exit status 2 (4.7242521s)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-205900 -n embed-certs-205900
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-205900 -n embed-certs-205900: exit status 2 (4.5359837s)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-205900 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-205900 --alsologtostderr -v=1: (3.5231978s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-205900 -n embed-certs-205900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-205900 -n embed-certs-205900: (4.9503665s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-205900 -n embed-certs-205900
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-205900 -n embed-certs-205900: (4.9138117s)
--- PASS: TestStartStop/group/embed-certs/serial/Pause (26.35s)

TestStartStop/group/newest-cni/serial/FirstStart (135.31s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-844700 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperv --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-844700 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperv --kubernetes-version=v1.27.3: (2m15.3130549s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (135.31s)

TestNetworkPlugins/group/auto/Start (142.55s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperv
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperv: (2m22.5503862s)
--- PASS: TestNetworkPlugins/group/auto/Start (142.55s)

TestNetworkPlugins/group/calico/Start (264.29s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperv
E0706 21:46:01.157747    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
E0706 21:46:01.185200    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
E0706 21:46:01.201160    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
E0706 21:46:01.230059    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
E0706 21:46:01.272942    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
E0706 21:46:01.362460    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
E0706 21:46:01.539076    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
E0706 21:46:01.874950    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
E0706 21:46:02.525142    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
E0706 21:46:03.807545    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
E0706 21:46:06.381800    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
E0706 21:46:11.514476    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
E0706 21:46:21.767457    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
E0706 21:46:31.262653    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
E0706 21:46:42.251860    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperv: (4m24.2908239s)
--- PASS: TestNetworkPlugins/group/calico/Start (264.29s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.73s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-844700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-844700 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.7255843s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.73s)

TestStartStop/group/newest-cni/serial/Stop (32.56s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-844700 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-844700 --alsologtostderr -v=3: (32.5582907s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (32.56s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (3s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-844700 -n newest-cni-844700
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-844700 -n newest-cni-844700: exit status 7 (874.3415ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-844700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0706 21:47:23.215708    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:246: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-844700 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: (2.1241038s)
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (3.00s)

TestStartStop/group/newest-cni/serial/SecondStart (147.29s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-844700 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperv --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe start -p newest-cni-844700 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperv --kubernetes-version=v1.27.3: (2m21.8838573s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-844700 -n newest-cni-844700
start_stop_delete_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-844700 -n newest-cni-844700: (5.4104807s)
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (147.29s)

TestNetworkPlugins/group/auto/KubeletFlags (3.51s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-852700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p auto-852700 "pgrep -a kubelet": (3.5136455s)
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (3.51s)

TestNetworkPlugins/group/auto/NetCatPod (16.66s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-852700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-nbzvv" [52b8f33b-3941-48b3-a467-bf7449aa1aae] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-nbzvv" [52b8f33b-3941-48b3-a467-bf7449aa1aae] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 16.028525s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (16.66s)

TestNetworkPlugins/group/auto/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-852700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.38s)

TestNetworkPlugins/group/auto/Localhost (0.36s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.36s)

TestNetworkPlugins/group/auto/HairPin (0.45s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.45s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-kvn75" [0deda0c4-74e0-482e-86da-4843c3b8f258] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-kvn75" [0deda0c4-74e0-482e-86da-4843c3b8f258] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 17.9633253s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (17.97s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-kvn75" [0deda0c4-74e0-482e-86da-4843c3b8f258] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0314059s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-730700 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E0706 21:49:51.702116    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
E0706 21:49:51.717238    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
E0706 21:49:51.732743    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
E0706 21:49:51.757251    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
E0706 21:49:51.803680    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
E0706 21:49:51.897778    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.58s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (4.53s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p newest-cni-844700 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p newest-cni-844700 "sudo crictl images -o json": (4.531365s)
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (4.53s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (4.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-730700 "sudo crictl images -o json"
E0706 21:49:52.066902    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
E0706 21:49:52.398485    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
E0706 21:49:53.046001    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
E0706 21:49:54.335487    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:304: (dbg) Done: out/minikube-windows-amd64.exe ssh -p default-k8s-diff-port-730700 "sudo crictl images -o json": (4.6227272s)
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (4.62s)

TestStartStop/group/newest-cni/serial/Pause (29.91s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-844700 --alsologtostderr -v=1
E0706 21:49:56.217077    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p newest-cni-844700 --alsologtostderr -v=1: (4.4426187s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-844700 -n newest-cni-844700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-844700 -n newest-cni-844700: exit status 2 (5.3806145s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-844700 -n newest-cni-844700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-844700 -n newest-cni-844700: exit status 2 (4.9701104s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-844700 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p newest-cni-844700 --alsologtostderr -v=1: (3.6751593s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-844700 -n newest-cni-844700
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-844700 -n newest-cni-844700: (5.9496859s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-844700 -n newest-cni-844700
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-844700 -n newest-cni-844700: (5.4877708s)
--- PASS: TestStartStop/group/newest-cni/serial/Pause (29.91s)
E0706 22:01:25.704758    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kindnet-852700\client.crt: The system cannot find the path specified.

TestStartStop/group/default-k8s-diff-port/serial/Pause (29.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-730700 --alsologtostderr -v=1
E0706 21:49:56.897364    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-730700 --alsologtostderr -v=1: (4.4463309s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-730700 -n default-k8s-diff-port-730700
E0706 21:50:02.031479    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-730700 -n default-k8s-diff-port-730700: exit status 2 (5.1556258s)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-730700 -n default-k8s-diff-port-730700
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-730700 -n default-k8s-diff-port-730700: exit status 2 (4.8883775s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-730700 --alsologtostderr -v=1
E0706 21:50:12.284355    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-730700 --alsologtostderr -v=1: (3.7442778s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-730700 -n default-k8s-diff-port-730700
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-730700 -n default-k8s-diff-port-730700: (5.793771s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-730700 -n default-k8s-diff-port-730700
start_stop_delete_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-730700 -n default-k8s-diff-port-730700: (5.376969s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (29.41s)

TestNetworkPlugins/group/calico/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-pwbsb" [b90c2114-31b0-43d5-a626-16f65ce86b80] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.0407165s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

TestNetworkPlugins/group/calico/KubeletFlags (4.57s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-852700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p calico-852700 "pgrep -a kubelet": (4.5699772s)
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (4.57s)

TestNetworkPlugins/group/calico/NetCatPod (16.81s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-852700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-jwx6c" [0323a6f1-b30e-4002-9576-82d8717c5309] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-jwx6c" [0323a6f1-b30e-4002-9576-82d8717c5309] Running
E0706 21:50:32.767618    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 16.022245s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (16.81s)

TestNetworkPlugins/group/calico/DNS (0.49s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-852700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.49s)

TestNetworkPlugins/group/calico/Localhost (0.51s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.51s)

TestNetworkPlugins/group/calico/HairPin (0.38s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.38s)

TestNetworkPlugins/group/custom-flannel/Start (168.58s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=hyperv
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=hyperv: (2m48.5769151s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (168.58s)

TestNetworkPlugins/group/false/Start (228.43s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperv
E0706 21:51:13.729651    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
E0706 21:51:28.988017    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\no-preload-036100\client.crt: The system cannot find the path specified.
E0706 21:51:31.269324    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperv: (3m48.4299182s)
--- PASS: TestNetworkPlugins/group/false/Start (228.43s)

TestNetworkPlugins/group/kindnet/Start (281.43s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperv
E0706 21:52:35.662501    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperv: (4m41.4322882s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (281.43s)

TestNetworkPlugins/group/flannel/Start (237.98s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperv
E0706 21:53:48.598175    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\auto-852700\client.crt: The system cannot find the path specified.
E0706 21:53:51.070946    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\default-k8s-diff-port-730700\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperv: (3m57.9837644s)
--- PASS: TestNetworkPlugins/group/flannel/Start (237.98s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (4.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-852700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p custom-flannel-852700 "pgrep -a kubelet": (4.1435726s)
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (4.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (26.52s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-852700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context custom-flannel-852700 replace --force -f testdata\netcat-deployment.yaml: (1.8382345s)
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-b6c6l" [63a414be-fc58-4f59-a6fd-3ee72b8a40f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-b6c6l" [63a414be-fc58-4f59-a6fd-3ee72b8a40f4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 24.0389677s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (26.52s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-852700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.45s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.36s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.37s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (3.91s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-852700 "pgrep -a kubelet"
E0706 21:54:56.212961    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\functional-121800\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p false-852700 "pgrep -a kubelet": (3.9053333s)
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (3.91s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (28.70s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-852700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-p29wx" [1559b281-029c-4f25-a4de-9111c2719643] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0706 21:55:08.376676    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
E0706 21:55:08.392691    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
E0706 21:55:08.408685    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
E0706 21:55:08.440008    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
E0706 21:55:08.488237    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
E0706 21:55:08.582380    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
E0706 21:55:08.756840    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
E0706 21:55:09.084626    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
E0706 21:55:09.841817    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
E0706 21:55:11.123901    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
E0706 21:55:13.693691    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
E0706 21:55:18.825346    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
E0706 21:55:19.508400    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\old-k8s-version-002700\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-7458db8b8-p29wx" [1559b281-029c-4f25-a4de-9111c2719643] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 28.0250602s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (28.70s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.43s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-852700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.43s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.35s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.39s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0706 21:55:29.078142    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
--- PASS: TestNetworkPlugins/group/false/HairPin (0.39s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rhm85" [b4e53792-ff29-4ef7-93ff-bdebb7cec77e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.0333079s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.04s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (3.93s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-852700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kindnet-852700 "pgrep -a kubelet": (3.9290766s)
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (3.93s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (26.64s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-852700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-l5pkg" [97b60ab2-c9fd-465f-bdf0-73e0ddd3d669] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0706 21:56:30.536476    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
E0706 21:56:31.271119    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-7458db8b8-l5pkg" [97b60ab2-c9fd-465f-bdf0-73e0ddd3d669] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 26.0149129s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (26.64s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.45s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-852700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.38s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.41s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (181.65s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperv
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperv: (3m1.6484396s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (181.65s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-th5wc" [f25e60d5-8d0a-4919-96c7-3f89c2bdb705] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.0423958s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.05s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (4.18s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-852700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p flannel-852700 "pgrep -a kubelet": (4.1789877s)
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (4.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (16.62s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-852700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-cxj2h" [79a9bda0-237d-4209-8470-dfa8ed4a4318] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-cxj2h" [79a9bda0-237d-4209-8470-dfa8ed4a4318] Running
E0706 21:58:07.539304    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\auto-852700\client.crt: The system cannot find the path specified.
E0706 21:58:10.000158    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\default-k8s-diff-port-730700\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 16.024148s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (16.62s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.40s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-852700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.41s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.39s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (174.58s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperv
E0706 21:58:35.352095    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\auto-852700\client.crt: The system cannot find the path specified.
E0706 21:58:37.816139    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\default-k8s-diff-port-730700\client.crt: The system cannot find the path specified.
E0706 21:58:57.447537    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\custom-flannel-852700\client.crt: The system cannot find the path specified.
E0706 21:58:57.463528    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\custom-flannel-852700\client.crt: The system cannot find the path specified.
E0706 21:58:57.479671    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\custom-flannel-852700\client.crt: The system cannot find the path specified.
E0706 21:58:57.511529    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\custom-flannel-852700\client.crt: The system cannot find the path specified.
E0706 21:58:57.559543    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\custom-flannel-852700\client.crt: The system cannot find the path specified.
E0706 21:58:57.652886    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\custom-flannel-852700\client.crt: The system cannot find the path specified.
E0706 21:58:57.823080    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\custom-flannel-852700\client.crt: The system cannot find the path specified.
E0706 21:58:58.154856    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\custom-flannel-852700\client.crt: The system cannot find the path specified.
E0706 21:58:58.797170    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\custom-flannel-852700\client.crt: The system cannot find the path specified.
E0706 21:59:00.080874    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\custom-flannel-852700\client.crt: The system cannot find the path specified.
E0706 21:59:02.648556    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\custom-flannel-852700\client.crt: The system cannot find the path specified.
E0706 21:59:07.776620    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\custom-flannel-852700\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperv: (2m54.5769133s)
--- PASS: TestNetworkPlugins/group/bridge/Start (174.58s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (153.41s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperv
E0706 21:59:59.876961    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
E0706 21:59:59.891813    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
E0706 21:59:59.906864    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
E0706 21:59:59.938998    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
E0706 21:59:59.985786    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
E0706 22:00:00.080409    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
E0706 22:00:00.252953    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
E0706 22:00:00.578567    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
E0706 22:00:01.233646    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
E0706 22:00:02.519269    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
E0706 22:00:05.094438    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
E0706 22:00:08.376787    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
E0706 22:00:10.226004    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
E0706 22:00:19.486635    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\custom-flannel-852700\client.crt: The system cannot find the path specified.
E0706 22:00:20.477580    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
E0706 22:00:36.520869    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\calico-852700\client.crt: The system cannot find the path specified.
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-852700 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperv: (2m33.4061236s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (153.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (3.68s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-852700 "pgrep -a kubelet"
E0706 22:00:40.963245    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p enable-default-cni-852700 "pgrep -a kubelet": (3.6744658s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (3.68s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.66s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-852700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-kbghl" [caf77213-d2f6-46cf-88dc-c19238699266] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-kbghl" [caf77213-d2f6-46cf-88dc-c19238699266] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 16.03208s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (16.66s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-852700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (3.67s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-852700 "pgrep -a kubelet"
E0706 22:01:31.278535    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ingress-addon-legacy-927000\client.crt: The system cannot find the path specified.
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p bridge-852700 "pgrep -a kubelet": (3.6647663s)
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (3.67s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (15.63s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-852700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-mnc6m" [9eeb7351-bf39-47e8-81f3-670f3769ff29] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0706 22:01:35.962022    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kindnet-852700\client.crt: The system cannot find the path specified.
E0706 22:01:41.418397    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\custom-flannel-852700\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-7458db8b8-mnc6m" [9eeb7351-bf39-47e8-81f3-670f3769ff29] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 15.0550251s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (15.63s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.69s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-852700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.69s)

TestNetworkPlugins/group/bridge/Localhost (0.36s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.36s)

TestNetworkPlugins/group/bridge/HairPin (0.38s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.38s)

TestNetworkPlugins/group/kubenet/KubeletFlags (3.72s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-852700 "pgrep -a kubelet"
net_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe ssh -p kubenet-852700 "pgrep -a kubelet": (3.714851s)
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (3.72s)

TestNetworkPlugins/group/kubenet/NetCatPod (15.59s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-852700 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-c4txv" [5a979d68-9d54-4658-9766-1e5a8fd8e8c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0706 22:02:37.416129    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\kindnet-852700\client.crt: The system cannot find the path specified.
E0706 22:02:43.863833    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\false-852700\client.crt: The system cannot find the path specified.
E0706 22:02:45.876137    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\flannel-852700\client.crt: The system cannot find the path specified.
E0706 22:02:45.904489    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\flannel-852700\client.crt: The system cannot find the path specified.
E0706 22:02:45.921374    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\flannel-852700\client.crt: The system cannot find the path specified.
E0706 22:02:45.948316    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\flannel-852700\client.crt: The system cannot find the path specified.
E0706 22:02:45.990565    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\flannel-852700\client.crt: The system cannot find the path specified.
E0706 22:02:46.081616    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\flannel-852700\client.crt: The system cannot find the path specified.
helpers_test.go:344: "netcat-7458db8b8-c4txv" [5a979d68-9d54-4658-9766-1e5a8fd8e8c5] Running
E0706 22:02:46.264955    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\flannel-852700\client.crt: The system cannot find the path specified.
E0706 22:02:46.592955    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\flannel-852700\client.crt: The system cannot find the path specified.
E0706 22:02:47.241784    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\flannel-852700\client.crt: The system cannot find the path specified.
E0706 22:02:48.546119    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\flannel-852700\client.crt: The system cannot find the path specified.
E0706 22:02:51.123165    8256 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\flannel-852700\client.crt: The system cannot find the path specified.
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 15.0271012s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (15.59s)

TestNetworkPlugins/group/kubenet/DNS (0.44s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-852700 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.44s)

TestNetworkPlugins/group/kubenet/Localhost (0.37s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.37s)

TestNetworkPlugins/group/kubenet/HairPin (0.39s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-852700 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.39s)

Test skip (29/302)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.02s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-121800 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-121800 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 5648: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.02s)

TestFunctional/parallel/MountCmd (0s)
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only runs with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (1.3s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-470000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-470000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p disable-driver-mounts-470000: (1.2960388s)
--- SKIP: TestStartStop/group/disable-driver-mounts (1.30s)

TestNetworkPlugins/group/cilium (14.98s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-852700 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> netcat: dig search kubernetes.default:
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> netcat: /etc/nsswitch.conf:
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> netcat: /etc/hosts:
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> netcat: /etc/resolv.conf:
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> host: /etc/nsswitch.conf:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: /etc/hosts:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: /etc/resolv.conf:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> host: crictl pods:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: crictl containers:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> k8s: describe netcat deployment:
error: context "cilium-852700" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-852700" does not exist

>>> k8s: netcat logs:
error: context "cilium-852700" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-852700" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-852700" does not exist

>>> k8s: coredns logs:
error: context "cilium-852700" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-852700" does not exist

>>> k8s: api server logs:
error: context "cilium-852700" does not exist

>>> host: /etc/cni:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: ip a s:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: ip r s:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: iptables-save:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: iptables table nat:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> k8s: describe cilium daemon set:
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-852700" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-852700" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> k8s: describe cilium deployment pod(s):
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-852700" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-852700" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-852700" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-852700" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-852700" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: kubelet daemon config:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> k8s: kubelet logs:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt
    extensions:
    - extension:
        last-update: Thu, 06 Jul 2023 21:23:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: cluster_info
    server: https://172.29.64.39:8443
  name: cert-expiration-861000
contexts:
- context:
    cluster: cert-expiration-861000
    extensions:
    - extension:
        last-update: Thu, 06 Jul 2023 21:23:12 UTC
        provider: minikube.sigs.k8s.io
        version: v1.30.1
      name: context_info
    namespace: default
    user: cert-expiration-861000
  name: cert-expiration-861000
current-context: cert-expiration-861000
kind: Config
preferences: {}
users:
- name: cert-expiration-861000
  user:
    client-certificate: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-861000\client.crt
    client-key: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\cert-expiration-861000\client.key

>>> k8s: cms:
Error in configuration: 
* context was not found for specified context: cilium-852700
* cluster has no server defined

>>> host: docker daemon status:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: docker daemon config:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: docker system info:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: cri-docker daemon status:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: cri-docker daemon config:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: cri-dockerd version:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: containerd daemon status:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: containerd daemon config:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: containerd config dump:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: crio daemon status:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: crio daemon config:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: /etc/crio:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

>>> host: crio config:
* Profile "cilium-852700" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-852700"

----------------------- debugLogs end: cilium-852700 [took: 13.6311708s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-852700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-852700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cilium-852700: (1.3450292s)
--- SKIP: TestNetworkPlugins/group/cilium (14.98s)